00:00:00.000 Started by upstream project "autotest-per-patch" build number 132317 00:00:00.000 originally caused by: 00:00:00.000 Started by user sys_sgci 00:00:00.056 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.056 The recommended git tool is: git 00:00:00.056 using credential 00000000-0000-0000-0000-000000000002 00:00:00.060 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.108 Fetching changes from the remote Git repository 00:00:00.112 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.191 Using shallow fetch with depth 1 00:00:00.191 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.191 > git --version # timeout=10 00:00:00.250 > git --version # 'git version 2.39.2' 00:00:00.250 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.302 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.302 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:06.193 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:06.208 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:06.221 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:06.221 > git config core.sparsecheckout # timeout=10 00:00:06.236 > git read-tree -mu HEAD # timeout=10 00:00:06.251 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:06.273 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:06.273 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:06.401 [Pipeline] Start of Pipeline 00:00:06.415 [Pipeline] library 00:00:06.417 Loading library shm_lib@master 00:00:06.417 Library shm_lib@master is cached. Copying from home. 00:00:06.434 [Pipeline] node 00:00:06.442 Running on VM-host-SM17 in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2 00:00:06.443 [Pipeline] { 00:00:06.451 [Pipeline] catchError 00:00:06.453 [Pipeline] { 00:00:06.464 [Pipeline] wrap 00:00:06.472 [Pipeline] { 00:00:06.479 [Pipeline] stage 00:00:06.480 [Pipeline] { (Prologue) 00:00:06.495 [Pipeline] echo 00:00:06.496 Node: VM-host-SM17 00:00:06.502 [Pipeline] cleanWs 00:00:06.509 [WS-CLEANUP] Deleting project workspace... 00:00:06.509 [WS-CLEANUP] Deferred wipeout is used... 
00:00:06.515 [WS-CLEANUP] done 00:00:06.700 [Pipeline] setCustomBuildProperty 00:00:06.774 [Pipeline] httpRequest 00:00:07.481 [Pipeline] echo 00:00:07.482 Sorcerer 10.211.164.20 is alive 00:00:07.489 [Pipeline] retry 00:00:07.491 [Pipeline] { 00:00:07.502 [Pipeline] httpRequest 00:00:07.507 HttpMethod: GET 00:00:07.508 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:07.508 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:07.521 Response Code: HTTP/1.1 200 OK 00:00:07.521 Success: Status code 200 is in the accepted range: 200,404 00:00:07.522 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:15.080 [Pipeline] } 00:00:15.091 [Pipeline] // retry 00:00:15.097 [Pipeline] sh 00:00:15.372 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:15.388 [Pipeline] httpRequest 00:00:15.922 [Pipeline] echo 00:00:15.924 Sorcerer 10.211.164.20 is alive 00:00:15.934 [Pipeline] retry 00:00:15.936 [Pipeline] { 00:00:15.950 [Pipeline] httpRequest 00:00:15.954 HttpMethod: GET 00:00:15.955 URL: http://10.211.164.20/packages/spdk_53ca6a88509de90de88d1fa95d7fbe9678bc6467.tar.gz 00:00:15.956 Sending request to url: http://10.211.164.20/packages/spdk_53ca6a88509de90de88d1fa95d7fbe9678bc6467.tar.gz 00:00:15.972 Response Code: HTTP/1.1 200 OK 00:00:15.973 Success: Status code 200 is in the accepted range: 200,404 00:00:15.974 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/spdk_53ca6a88509de90de88d1fa95d7fbe9678bc6467.tar.gz 00:02:29.044 [Pipeline] } 00:02:29.062 [Pipeline] // retry 00:02:29.070 [Pipeline] sh 00:02:29.353 + tar --no-same-owner -xf spdk_53ca6a88509de90de88d1fa95d7fbe9678bc6467.tar.gz 00:02:32.653 [Pipeline] sh 00:02:32.933 + git -C spdk log --oneline -n5 00:02:32.933 53ca6a885 bdev/nvme: Rearrange fields in spdk_bdev_nvme_opts to reduce holes. 00:02:32.933 03b7aa9c7 bdev/nvme: Move the spdk_bdev_nvme_opts and spdk_bdev_timeout_action struct to the public header. 
00:02:32.933 d47eb51c9 bdev: fix a race between reset start and complete 00:02:32.933 83e8405e4 nvmf/fc: Qpair disconnect callback: Serialize FC delete connection & close qpair process 00:02:32.933 0eab4c6fb nvmf/fc: Validate the ctrlr pointer inside nvmf_fc_req_bdev_abort() 00:02:32.952 [Pipeline] writeFile 00:02:32.967 [Pipeline] sh 00:02:33.263 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:02:33.332 [Pipeline] sh 00:02:33.615 + cat autorun-spdk.conf 00:02:33.615 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:33.615 SPDK_TEST_NVMF=1 00:02:33.615 SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:33.615 SPDK_TEST_URING=1 00:02:33.615 SPDK_TEST_USDT=1 00:02:33.615 SPDK_RUN_UBSAN=1 00:02:33.615 NET_TYPE=virt 00:02:33.615 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:33.622 RUN_NIGHTLY=0 00:02:33.624 [Pipeline] } 00:02:33.640 [Pipeline] // stage 00:02:33.658 [Pipeline] stage 00:02:33.660 [Pipeline] { (Run VM) 00:02:33.675 [Pipeline] sh 00:02:33.958 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:02:33.959 + echo 'Start stage prepare_nvme.sh' 00:02:33.959 Start stage prepare_nvme.sh 00:02:33.959 + [[ -n 2 ]] 00:02:33.959 + disk_prefix=ex2 00:02:33.959 + [[ -n /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2 ]] 00:02:33.959 + [[ -e /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/autorun-spdk.conf ]] 00:02:33.959 + source /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/autorun-spdk.conf 00:02:33.959 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:33.959 ++ SPDK_TEST_NVMF=1 00:02:33.959 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:33.959 ++ SPDK_TEST_URING=1 00:02:33.959 ++ SPDK_TEST_USDT=1 00:02:33.959 ++ SPDK_RUN_UBSAN=1 00:02:33.959 ++ NET_TYPE=virt 00:02:33.959 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:33.959 ++ RUN_NIGHTLY=0 00:02:33.959 + cd /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2 00:02:33.959 + nvme_files=() 00:02:33.959 + declare -A nvme_files 00:02:33.959 + backend_dir=/var/lib/libvirt/images/backends 00:02:33.959 + nvme_files['nvme.img']=5G 00:02:33.959 + nvme_files['nvme-cmb.img']=5G 00:02:33.959 + nvme_files['nvme-multi0.img']=4G 00:02:33.959 + nvme_files['nvme-multi1.img']=4G 00:02:33.959 + nvme_files['nvme-multi2.img']=4G 00:02:33.959 + nvme_files['nvme-openstack.img']=8G 00:02:33.959 + nvme_files['nvme-zns.img']=5G 00:02:33.959 + (( SPDK_TEST_NVME_PMR == 1 )) 00:02:33.959 + (( SPDK_TEST_FTL == 1 )) 00:02:33.959 + (( SPDK_TEST_NVME_FDP == 1 )) 00:02:33.959 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:02:33.959 + for nvme in "${!nvme_files[@]}" 00:02:33.959 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi2.img -s 4G 00:02:33.959 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:02:33.959 + for nvme in "${!nvme_files[@]}" 00:02:33.959 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-cmb.img -s 5G 00:02:33.959 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:02:33.959 + for nvme in "${!nvme_files[@]}" 00:02:33.959 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-openstack.img -s 8G 00:02:33.959 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:02:33.959 + for nvme in "${!nvme_files[@]}" 00:02:33.959 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-zns.img -s 5G 00:02:33.959 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:02:33.959 + for nvme in "${!nvme_files[@]}" 00:02:33.959 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi1.img -s 4G 00:02:33.959 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:02:33.959 + for nvme in "${!nvme_files[@]}" 00:02:33.959 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi0.img -s 4G 00:02:33.959 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:02:33.959 + for nvme in "${!nvme_files[@]}" 00:02:33.959 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme.img -s 5G 00:02:33.959 Formatting '/var/lib/libvirt/images/backends/ex2-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:02:33.959 ++ sudo grep -rl ex2-nvme.img /etc/libvirt/qemu 00:02:33.959 + echo 'End stage prepare_nvme.sh' 00:02:33.959 End stage prepare_nvme.sh 00:02:33.971 [Pipeline] sh 00:02:34.252 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:02:34.252 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex2-nvme.img -b /var/lib/libvirt/images/backends/ex2-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex2-nvme-multi1.img:/var/lib/libvirt/images/backends/ex2-nvme-multi2.img -H -a -v -f fedora39 00:02:34.252 00:02:34.252 DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/spdk/scripts/vagrant 00:02:34.252 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/spdk 00:02:34.252 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2 00:02:34.252 HELP=0 00:02:34.252 DRY_RUN=0 00:02:34.252 NVME_FILE=/var/lib/libvirt/images/backends/ex2-nvme.img,/var/lib/libvirt/images/backends/ex2-nvme-multi0.img, 00:02:34.252 NVME_DISKS_TYPE=nvme,nvme, 00:02:34.252 NVME_AUTO_CREATE=0 00:02:34.252 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex2-nvme-multi1.img:/var/lib/libvirt/images/backends/ex2-nvme-multi2.img, 00:02:34.252 NVME_CMB=,, 00:02:34.252 NVME_PMR=,, 00:02:34.252 NVME_ZNS=,, 00:02:34.252 NVME_MS=,, 00:02:34.252 NVME_FDP=,, 
00:02:34.252 SPDK_VAGRANT_DISTRO=fedora39 00:02:34.252 SPDK_VAGRANT_VMCPU=10 00:02:34.252 SPDK_VAGRANT_VMRAM=12288 00:02:34.252 SPDK_VAGRANT_PROVIDER=libvirt 00:02:34.252 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:02:34.252 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:02:34.252 SPDK_OPENSTACK_NETWORK=0 00:02:34.252 VAGRANT_PACKAGE_BOX=0 00:02:34.252 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/spdk/scripts/vagrant/Vagrantfile 00:02:34.252 FORCE_DISTRO=true 00:02:34.252 VAGRANT_BOX_VERSION= 00:02:34.252 EXTRA_VAGRANTFILES= 00:02:34.252 NIC_MODEL=e1000 00:02:34.252 00:02:34.252 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/fedora39-libvirt' 00:02:34.252 /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/fedora39-libvirt /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2 00:02:37.543 Bringing machine 'default' up with 'libvirt' provider... 00:02:37.803 ==> default: Creating image (snapshot of base box volume). 00:02:37.803 ==> default: Creating domain with the following settings... 00:02:37.803 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1732008625_5354ffa4ffc454db0f54 00:02:37.803 ==> default: -- Domain type: kvm 00:02:37.803 ==> default: -- Cpus: 10 00:02:37.803 ==> default: -- Feature: acpi 00:02:37.803 ==> default: -- Feature: apic 00:02:37.803 ==> default: -- Feature: pae 00:02:37.803 ==> default: -- Memory: 12288M 00:02:37.803 ==> default: -- Memory Backing: hugepages: 00:02:37.803 ==> default: -- Management MAC: 00:02:37.803 ==> default: -- Loader: 00:02:37.803 ==> default: -- Nvram: 00:02:37.803 ==> default: -- Base box: spdk/fedora39 00:02:37.803 ==> default: -- Storage pool: default 00:02:37.803 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1732008625_5354ffa4ffc454db0f54.img (20G) 00:02:37.803 ==> default: -- Volume Cache: default 00:02:37.803 ==> default: -- Kernel: 00:02:37.803 ==> default: -- Initrd: 00:02:37.803 ==> default: -- Graphics Type: vnc 00:02:37.803 ==> default: -- Graphics Port: -1 00:02:37.803 ==> default: -- Graphics IP: 127.0.0.1 00:02:37.803 ==> default: -- Graphics Password: Not defined 00:02:37.803 ==> default: -- Video Type: cirrus 00:02:37.803 ==> default: -- Video VRAM: 9216 00:02:37.803 ==> default: -- Sound Type: 00:02:37.803 ==> default: -- Keymap: en-us 00:02:37.803 ==> default: -- TPM Path: 00:02:37.803 ==> default: -- INPUT: type=mouse, bus=ps2 00:02:37.803 ==> default: -- Command line args: 00:02:37.803 ==> default: -> value=-device, 00:02:37.803 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:02:37.803 ==> default: -> value=-drive, 00:02:37.803 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme.img,if=none,id=nvme-0-drive0, 00:02:37.803 ==> default: -> value=-device, 00:02:37.803 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:37.803 ==> default: -> value=-device, 00:02:37.803 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:02:37.803 ==> default: -> value=-drive, 00:02:37.803 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:02:37.803 ==> default: -> value=-device, 00:02:37.803 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:37.803 ==> default: -> value=-drive, 00:02:37.803 ==> default: -> 
value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:02:37.803 ==> default: -> value=-device, 00:02:37.803 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:37.803 ==> default: -> value=-drive, 00:02:37.803 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:02:37.803 ==> default: -> value=-device, 00:02:37.803 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:38.063 ==> default: Creating shared folders metadata... 00:02:38.063 ==> default: Starting domain. 00:02:39.443 ==> default: Waiting for domain to get an IP address... 00:02:57.535 ==> default: Waiting for SSH to become available... 00:02:57.535 ==> default: Configuring and enabling network interfaces... 00:02:59.478 default: SSH address: 192.168.121.71:22 00:02:59.478 default: SSH username: vagrant 00:02:59.478 default: SSH auth method: private key 00:03:02.015 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/spdk/ => /home/vagrant/spdk_repo/spdk 00:03:10.134 ==> default: Mounting SSHFS shared folder... 00:03:10.702 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:03:10.702 ==> default: Checking Mount.. 00:03:12.081 ==> default: Folder Successfully Mounted! 00:03:12.081 ==> default: Running provisioner: file... 00:03:12.649 default: ~/.gitconfig => .gitconfig 00:03:13.217 00:03:13.217 SUCCESS! 00:03:13.217 00:03:13.217 cd to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/fedora39-libvirt and type "vagrant ssh" to use. 00:03:13.217 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:03:13.217 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/fedora39-libvirt" to destroy all trace of vm. 00:03:13.217 00:03:13.227 [Pipeline] } 00:03:13.242 [Pipeline] // stage 00:03:13.251 [Pipeline] dir 00:03:13.252 Running in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/fedora39-libvirt 00:03:13.253 [Pipeline] { 00:03:13.266 [Pipeline] catchError 00:03:13.267 [Pipeline] { 00:03:13.279 [Pipeline] sh 00:03:13.559 + vagrant ssh-config --host vagrant 00:03:13.560 + + sed -ne /^Host/,$p 00:03:13.560 tee ssh_conf 00:03:16.848 Host vagrant 00:03:16.848 HostName 192.168.121.71 00:03:16.848 User vagrant 00:03:16.848 Port 22 00:03:16.848 UserKnownHostsFile /dev/null 00:03:16.848 StrictHostKeyChecking no 00:03:16.848 PasswordAuthentication no 00:03:16.848 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:03:16.848 IdentitiesOnly yes 00:03:16.848 LogLevel FATAL 00:03:16.848 ForwardAgent yes 00:03:16.848 ForwardX11 yes 00:03:16.848 00:03:16.862 [Pipeline] withEnv 00:03:16.864 [Pipeline] { 00:03:16.878 [Pipeline] sh 00:03:17.159 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:03:17.159 source /etc/os-release 00:03:17.159 [[ -e /image.version ]] && img=$(< /image.version) 00:03:17.159 # Minimal, systemd-like check. 
00:03:17.159 if [[ -e /.dockerenv ]]; then 00:03:17.159 # Clear garbage from the node's name: 00:03:17.159 # agt-er_autotest_547-896 -> autotest_547-896 00:03:17.159 # $HOSTNAME is the actual container id 00:03:17.159 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:03:17.159 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:03:17.159 # We can assume this is a mount from a host where container is running, 00:03:17.159 # so fetch its hostname to easily identify the target swarm worker. 00:03:17.159 container="$(< /etc/hostname) ($agent)" 00:03:17.159 else 00:03:17.159 # Fallback 00:03:17.159 container=$agent 00:03:17.159 fi 00:03:17.159 fi 00:03:17.159 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:03:17.159 00:03:17.430 [Pipeline] } 00:03:17.447 [Pipeline] // withEnv 00:03:17.457 [Pipeline] setCustomBuildProperty 00:03:17.473 [Pipeline] stage 00:03:17.476 [Pipeline] { (Tests) 00:03:17.495 [Pipeline] sh 00:03:17.775 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:03:18.047 [Pipeline] sh 00:03:18.327 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:03:18.600 [Pipeline] timeout 00:03:18.601 Timeout set to expire in 1 hr 0 min 00:03:18.603 [Pipeline] { 00:03:18.618 [Pipeline] sh 00:03:18.903 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:03:19.470 HEAD is now at 53ca6a885 bdev/nvme: Rearrange fields in spdk_bdev_nvme_opts to reduce holes. 00:03:19.481 [Pipeline] sh 00:03:19.781 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:03:20.061 [Pipeline] sh 00:03:20.341 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:03:20.358 [Pipeline] sh 00:03:20.638 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-uring-vg-autotest ./autoruner.sh spdk_repo 00:03:20.897 ++ readlink -f spdk_repo 00:03:20.897 + DIR_ROOT=/home/vagrant/spdk_repo 00:03:20.897 + [[ -n /home/vagrant/spdk_repo ]] 00:03:20.897 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:03:20.897 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:03:20.897 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:03:20.897 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:03:20.897 + [[ -d /home/vagrant/spdk_repo/output ]] 00:03:20.897 + [[ nvmf-tcp-uring-vg-autotest == pkgdep-* ]] 00:03:20.897 + cd /home/vagrant/spdk_repo 00:03:20.897 + source /etc/os-release 00:03:20.897 ++ NAME='Fedora Linux' 00:03:20.897 ++ VERSION='39 (Cloud Edition)' 00:03:20.897 ++ ID=fedora 00:03:20.897 ++ VERSION_ID=39 00:03:20.897 ++ VERSION_CODENAME= 00:03:20.897 ++ PLATFORM_ID=platform:f39 00:03:20.897 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:03:20.897 ++ ANSI_COLOR='0;38;2;60;110;180' 00:03:20.897 ++ LOGO=fedora-logo-icon 00:03:20.897 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:03:20.897 ++ HOME_URL=https://fedoraproject.org/ 00:03:20.897 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:03:20.897 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:03:20.897 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:03:20.897 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:03:20.897 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:03:20.897 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:03:20.897 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:03:20.897 ++ SUPPORT_END=2024-11-12 00:03:20.897 ++ VARIANT='Cloud Edition' 00:03:20.897 ++ VARIANT_ID=cloud 00:03:20.897 + uname -a 00:03:20.897 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:03:20.897 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:03:21.464 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:21.464 Hugepages 00:03:21.464 node hugesize free / total 00:03:21.464 node0 1048576kB 0 / 0 00:03:21.464 node0 2048kB 0 / 0 00:03:21.464 00:03:21.464 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:21.464 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:03:21.464 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:03:21.464 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:03:21.464 + rm -f /tmp/spdk-ld-path 00:03:21.464 + source autorun-spdk.conf 00:03:21.464 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:03:21.464 ++ SPDK_TEST_NVMF=1 00:03:21.464 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:03:21.464 ++ SPDK_TEST_URING=1 00:03:21.464 ++ SPDK_TEST_USDT=1 00:03:21.464 ++ SPDK_RUN_UBSAN=1 00:03:21.464 ++ NET_TYPE=virt 00:03:21.464 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:03:21.464 ++ RUN_NIGHTLY=0 00:03:21.464 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:03:21.465 + [[ -n '' ]] 00:03:21.465 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:03:21.465 + for M in /var/spdk/build-*-manifest.txt 00:03:21.465 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:03:21.465 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:03:21.465 + for M in /var/spdk/build-*-manifest.txt 00:03:21.465 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:03:21.465 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:03:21.465 + for M in /var/spdk/build-*-manifest.txt 00:03:21.465 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:03:21.465 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:03:21.465 ++ uname 00:03:21.465 + [[ Linux == \L\i\n\u\x ]] 00:03:21.465 + sudo dmesg -T 00:03:21.465 + sudo dmesg --clear 00:03:21.465 + dmesg_pid=5201 00:03:21.465 + sudo dmesg -Tw 00:03:21.465 + [[ Fedora Linux == FreeBSD ]] 00:03:21.465 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:03:21.465 + 
UNBIND_ENTIRE_IOMMU_GROUP=yes 00:03:21.465 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:03:21.465 + [[ -x /usr/src/fio-static/fio ]] 00:03:21.465 + export FIO_BIN=/usr/src/fio-static/fio 00:03:21.465 + FIO_BIN=/usr/src/fio-static/fio 00:03:21.465 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:03:21.465 + [[ ! -v VFIO_QEMU_BIN ]] 00:03:21.465 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:03:21.465 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:03:21.465 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:03:21.465 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:03:21.465 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:03:21.465 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:03:21.465 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:03:21.465 09:31:09 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:03:21.465 09:31:09 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:03:21.465 09:31:09 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:03:21.465 09:31:09 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:03:21.465 09:31:09 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp 00:03:21.465 09:31:09 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_URING=1 00:03:21.465 09:31:09 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_TEST_USDT=1 00:03:21.465 09:31:09 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1 00:03:21.465 09:31:09 -- spdk_repo/autorun-spdk.conf@7 -- $ NET_TYPE=virt 00:03:21.465 09:31:09 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:03:21.465 09:31:09 -- spdk_repo/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0 00:03:21.465 09:31:09 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:03:21.465 09:31:09 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:03:21.722 09:31:09 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:03:21.723 09:31:09 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:21.723 09:31:09 -- scripts/common.sh@15 -- $ shopt -s extglob 00:03:21.723 09:31:09 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:03:21.723 09:31:09 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:21.723 09:31:09 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:21.723 09:31:09 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:21.723 09:31:09 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:21.723 09:31:09 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:21.723 09:31:09 -- paths/export.sh@5 -- $ export PATH 00:03:21.723 09:31:09 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:21.723 09:31:09 -- common/autobuild_common.sh@485 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:03:21.723 09:31:09 -- common/autobuild_common.sh@486 -- $ date +%s 00:03:21.723 09:31:09 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1732008669.XXXXXX 00:03:21.723 09:31:09 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1732008669.wVrxhn 00:03:21.723 09:31:09 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:03:21.723 09:31:09 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:03:21.723 09:31:09 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:03:21.723 09:31:09 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:03:21.723 09:31:09 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:03:21.723 09:31:09 -- common/autobuild_common.sh@502 -- $ get_config_params 00:03:21.723 09:31:09 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:03:21.723 09:31:09 -- common/autotest_common.sh@10 -- $ set +x 00:03:21.723 09:31:09 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring' 00:03:21.723 09:31:09 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:03:21.723 09:31:09 -- pm/common@17 -- $ local monitor 00:03:21.723 09:31:09 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:21.723 09:31:09 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:21.723 09:31:09 -- pm/common@25 -- $ sleep 1 00:03:21.723 09:31:09 -- pm/common@21 -- $ date +%s 00:03:21.723 09:31:09 -- pm/common@21 -- $ date +%s 00:03:21.723 09:31:09 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732008669 00:03:21.723 09:31:09 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732008669 00:03:21.723 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732008669_collect-cpu-load.pm.log 00:03:21.723 Redirecting to 
/home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732008669_collect-vmstat.pm.log 00:03:22.659 09:31:10 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:03:22.659 09:31:10 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:03:22.659 09:31:10 -- spdk/autobuild.sh@12 -- $ umask 022 00:03:22.659 09:31:10 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:03:22.659 09:31:10 -- spdk/autobuild.sh@16 -- $ date -u 00:03:22.659 Tue Nov 19 09:31:10 AM UTC 2024 00:03:22.659 09:31:10 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:03:22.659 v25.01-pre-192-g53ca6a885 00:03:22.659 09:31:10 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:03:22.659 09:31:10 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:03:22.659 09:31:10 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:03:22.659 09:31:10 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:03:22.659 09:31:10 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:03:22.659 09:31:10 -- common/autotest_common.sh@10 -- $ set +x 00:03:22.659 ************************************ 00:03:22.659 START TEST ubsan 00:03:22.659 ************************************ 00:03:22.659 using ubsan 00:03:22.659 09:31:10 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:03:22.659 00:03:22.659 real 0m0.000s 00:03:22.659 user 0m0.000s 00:03:22.659 sys 0m0.000s 00:03:22.659 09:31:10 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:22.659 09:31:10 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:03:22.659 ************************************ 00:03:22.659 END TEST ubsan 00:03:22.659 ************************************ 00:03:22.659 09:31:10 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:03:22.659 09:31:10 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:22.659 09:31:10 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:22.659 09:31:10 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:03:22.659 09:31:10 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:03:22.659 09:31:10 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:03:22.659 09:31:10 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:03:22.659 09:31:10 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:03:22.659 09:31:10 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-shared 00:03:22.919 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:03:22.919 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:03:23.178 Using 'verbs' RDMA provider 00:03:38.998 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:03:51.211 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:03:51.211 Creating mk/config.mk...done. 00:03:51.211 Creating mk/cc.flags.mk...done. 00:03:51.211 Type 'make' to build. 
00:03:51.211 09:31:38 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:03:51.211 09:31:38 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:03:51.211 09:31:38 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:03:51.211 09:31:38 -- common/autotest_common.sh@10 -- $ set +x 00:03:51.211 ************************************ 00:03:51.211 START TEST make 00:03:51.211 ************************************ 00:03:51.211 09:31:38 make -- common/autotest_common.sh@1129 -- $ make -j10 00:03:51.211 make[1]: Nothing to be done for 'all'. 00:04:03.422 The Meson build system 00:04:03.422 Version: 1.5.0 00:04:03.422 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:04:03.422 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:04:03.422 Build type: native build 00:04:03.422 Program cat found: YES (/usr/bin/cat) 00:04:03.422 Project name: DPDK 00:04:03.422 Project version: 24.03.0 00:04:03.422 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:04:03.422 C linker for the host machine: cc ld.bfd 2.40-14 00:04:03.422 Host machine cpu family: x86_64 00:04:03.422 Host machine cpu: x86_64 00:04:03.422 Message: ## Building in Developer Mode ## 00:04:03.422 Program pkg-config found: YES (/usr/bin/pkg-config) 00:04:03.423 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:04:03.423 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:04:03.423 Program python3 found: YES (/usr/bin/python3) 00:04:03.423 Program cat found: YES (/usr/bin/cat) 00:04:03.423 Compiler for C supports arguments -march=native: YES 00:04:03.423 Checking for size of "void *" : 8 00:04:03.423 Checking for size of "void *" : 8 (cached) 00:04:03.423 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:04:03.423 Library m found: YES 00:04:03.423 Library numa found: YES 00:04:03.423 Has header "numaif.h" : YES 00:04:03.423 Library fdt found: NO 00:04:03.423 Library execinfo found: NO 00:04:03.423 Has header "execinfo.h" : YES 00:04:03.423 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:04:03.423 Run-time dependency libarchive found: NO (tried pkgconfig) 00:04:03.423 Run-time dependency libbsd found: NO (tried pkgconfig) 00:04:03.423 Run-time dependency jansson found: NO (tried pkgconfig) 00:04:03.423 Run-time dependency openssl found: YES 3.1.1 00:04:03.423 Run-time dependency libpcap found: YES 1.10.4 00:04:03.423 Has header "pcap.h" with dependency libpcap: YES 00:04:03.423 Compiler for C supports arguments -Wcast-qual: YES 00:04:03.423 Compiler for C supports arguments -Wdeprecated: YES 00:04:03.423 Compiler for C supports arguments -Wformat: YES 00:04:03.423 Compiler for C supports arguments -Wformat-nonliteral: NO 00:04:03.423 Compiler for C supports arguments -Wformat-security: NO 00:04:03.423 Compiler for C supports arguments -Wmissing-declarations: YES 00:04:03.423 Compiler for C supports arguments -Wmissing-prototypes: YES 00:04:03.423 Compiler for C supports arguments -Wnested-externs: YES 00:04:03.423 Compiler for C supports arguments -Wold-style-definition: YES 00:04:03.423 Compiler for C supports arguments -Wpointer-arith: YES 00:04:03.423 Compiler for C supports arguments -Wsign-compare: YES 00:04:03.423 Compiler for C supports arguments -Wstrict-prototypes: YES 00:04:03.423 Compiler for C supports arguments -Wundef: YES 00:04:03.423 Compiler for C supports arguments -Wwrite-strings: YES 00:04:03.423 Compiler for C supports 
arguments -Wno-address-of-packed-member: YES 00:04:03.423 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:04:03.423 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:04:03.423 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:04:03.423 Program objdump found: YES (/usr/bin/objdump) 00:04:03.423 Compiler for C supports arguments -mavx512f: YES 00:04:03.423 Checking if "AVX512 checking" compiles: YES 00:04:03.423 Fetching value of define "__SSE4_2__" : 1 00:04:03.423 Fetching value of define "__AES__" : 1 00:04:03.423 Fetching value of define "__AVX__" : 1 00:04:03.423 Fetching value of define "__AVX2__" : 1 00:04:03.423 Fetching value of define "__AVX512BW__" : (undefined) 00:04:03.423 Fetching value of define "__AVX512CD__" : (undefined) 00:04:03.423 Fetching value of define "__AVX512DQ__" : (undefined) 00:04:03.423 Fetching value of define "__AVX512F__" : (undefined) 00:04:03.423 Fetching value of define "__AVX512VL__" : (undefined) 00:04:03.423 Fetching value of define "__PCLMUL__" : 1 00:04:03.423 Fetching value of define "__RDRND__" : 1 00:04:03.423 Fetching value of define "__RDSEED__" : 1 00:04:03.423 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:04:03.423 Fetching value of define "__znver1__" : (undefined) 00:04:03.423 Fetching value of define "__znver2__" : (undefined) 00:04:03.423 Fetching value of define "__znver3__" : (undefined) 00:04:03.423 Fetching value of define "__znver4__" : (undefined) 00:04:03.423 Compiler for C supports arguments -Wno-format-truncation: YES 00:04:03.423 Message: lib/log: Defining dependency "log" 00:04:03.423 Message: lib/kvargs: Defining dependency "kvargs" 00:04:03.423 Message: lib/telemetry: Defining dependency "telemetry" 00:04:03.423 Checking for function "getentropy" : NO 00:04:03.423 Message: lib/eal: Defining dependency "eal" 00:04:03.423 Message: lib/ring: Defining dependency "ring" 00:04:03.423 Message: lib/rcu: Defining dependency "rcu" 00:04:03.423 Message: lib/mempool: Defining dependency "mempool" 00:04:03.423 Message: lib/mbuf: Defining dependency "mbuf" 00:04:03.423 Fetching value of define "__PCLMUL__" : 1 (cached) 00:04:03.423 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:04:03.423 Compiler for C supports arguments -mpclmul: YES 00:04:03.423 Compiler for C supports arguments -maes: YES 00:04:03.423 Compiler for C supports arguments -mavx512f: YES (cached) 00:04:03.423 Compiler for C supports arguments -mavx512bw: YES 00:04:03.423 Compiler for C supports arguments -mavx512dq: YES 00:04:03.423 Compiler for C supports arguments -mavx512vl: YES 00:04:03.423 Compiler for C supports arguments -mvpclmulqdq: YES 00:04:03.423 Compiler for C supports arguments -mavx2: YES 00:04:03.423 Compiler for C supports arguments -mavx: YES 00:04:03.423 Message: lib/net: Defining dependency "net" 00:04:03.423 Message: lib/meter: Defining dependency "meter" 00:04:03.423 Message: lib/ethdev: Defining dependency "ethdev" 00:04:03.423 Message: lib/pci: Defining dependency "pci" 00:04:03.423 Message: lib/cmdline: Defining dependency "cmdline" 00:04:03.423 Message: lib/hash: Defining dependency "hash" 00:04:03.423 Message: lib/timer: Defining dependency "timer" 00:04:03.423 Message: lib/compressdev: Defining dependency "compressdev" 00:04:03.423 Message: lib/cryptodev: Defining dependency "cryptodev" 00:04:03.423 Message: lib/dmadev: Defining dependency "dmadev" 00:04:03.423 Compiler for C supports arguments -Wno-cast-qual: YES 00:04:03.423 Message: lib/power: Defining 
dependency "power" 00:04:03.423 Message: lib/reorder: Defining dependency "reorder" 00:04:03.423 Message: lib/security: Defining dependency "security" 00:04:03.423 Has header "linux/userfaultfd.h" : YES 00:04:03.423 Has header "linux/vduse.h" : YES 00:04:03.423 Message: lib/vhost: Defining dependency "vhost" 00:04:03.423 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:04:03.423 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:04:03.423 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:04:03.423 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:04:03.423 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:04:03.423 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:04:03.423 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:04:03.423 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:04:03.423 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:04:03.423 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:04:03.423 Program doxygen found: YES (/usr/local/bin/doxygen) 00:04:03.423 Configuring doxy-api-html.conf using configuration 00:04:03.423 Configuring doxy-api-man.conf using configuration 00:04:03.423 Program mandb found: YES (/usr/bin/mandb) 00:04:03.423 Program sphinx-build found: NO 00:04:03.423 Configuring rte_build_config.h using configuration 00:04:03.423 Message: 00:04:03.423 ================= 00:04:03.423 Applications Enabled 00:04:03.423 ================= 00:04:03.423 00:04:03.423 apps: 00:04:03.423 00:04:03.423 00:04:03.423 Message: 00:04:03.423 ================= 00:04:03.423 Libraries Enabled 00:04:03.423 ================= 00:04:03.423 00:04:03.423 libs: 00:04:03.423 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:04:03.424 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:04:03.424 cryptodev, dmadev, power, reorder, security, vhost, 00:04:03.424 00:04:03.424 Message: 00:04:03.424 =============== 00:04:03.424 Drivers Enabled 00:04:03.424 =============== 00:04:03.424 00:04:03.424 common: 00:04:03.424 00:04:03.424 bus: 00:04:03.424 pci, vdev, 00:04:03.424 mempool: 00:04:03.424 ring, 00:04:03.424 dma: 00:04:03.424 00:04:03.424 net: 00:04:03.424 00:04:03.424 crypto: 00:04:03.424 00:04:03.424 compress: 00:04:03.424 00:04:03.424 vdpa: 00:04:03.424 00:04:03.424 00:04:03.424 Message: 00:04:03.424 ================= 00:04:03.424 Content Skipped 00:04:03.424 ================= 00:04:03.424 00:04:03.424 apps: 00:04:03.424 dumpcap: explicitly disabled via build config 00:04:03.424 graph: explicitly disabled via build config 00:04:03.424 pdump: explicitly disabled via build config 00:04:03.424 proc-info: explicitly disabled via build config 00:04:03.424 test-acl: explicitly disabled via build config 00:04:03.424 test-bbdev: explicitly disabled via build config 00:04:03.424 test-cmdline: explicitly disabled via build config 00:04:03.424 test-compress-perf: explicitly disabled via build config 00:04:03.424 test-crypto-perf: explicitly disabled via build config 00:04:03.424 test-dma-perf: explicitly disabled via build config 00:04:03.424 test-eventdev: explicitly disabled via build config 00:04:03.424 test-fib: explicitly disabled via build config 00:04:03.424 test-flow-perf: explicitly disabled via build config 00:04:03.424 test-gpudev: explicitly disabled via build config 00:04:03.424 test-mldev: explicitly disabled via build config 00:04:03.424 test-pipeline: 
explicitly disabled via build config 00:04:03.424 test-pmd: explicitly disabled via build config 00:04:03.424 test-regex: explicitly disabled via build config 00:04:03.424 test-sad: explicitly disabled via build config 00:04:03.424 test-security-perf: explicitly disabled via build config 00:04:03.424 00:04:03.424 libs: 00:04:03.424 argparse: explicitly disabled via build config 00:04:03.424 metrics: explicitly disabled via build config 00:04:03.424 acl: explicitly disabled via build config 00:04:03.424 bbdev: explicitly disabled via build config 00:04:03.424 bitratestats: explicitly disabled via build config 00:04:03.424 bpf: explicitly disabled via build config 00:04:03.424 cfgfile: explicitly disabled via build config 00:04:03.424 distributor: explicitly disabled via build config 00:04:03.424 efd: explicitly disabled via build config 00:04:03.424 eventdev: explicitly disabled via build config 00:04:03.424 dispatcher: explicitly disabled via build config 00:04:03.424 gpudev: explicitly disabled via build config 00:04:03.424 gro: explicitly disabled via build config 00:04:03.424 gso: explicitly disabled via build config 00:04:03.424 ip_frag: explicitly disabled via build config 00:04:03.424 jobstats: explicitly disabled via build config 00:04:03.424 latencystats: explicitly disabled via build config 00:04:03.424 lpm: explicitly disabled via build config 00:04:03.424 member: explicitly disabled via build config 00:04:03.424 pcapng: explicitly disabled via build config 00:04:03.424 rawdev: explicitly disabled via build config 00:04:03.424 regexdev: explicitly disabled via build config 00:04:03.424 mldev: explicitly disabled via build config 00:04:03.424 rib: explicitly disabled via build config 00:04:03.424 sched: explicitly disabled via build config 00:04:03.424 stack: explicitly disabled via build config 00:04:03.424 ipsec: explicitly disabled via build config 00:04:03.424 pdcp: explicitly disabled via build config 00:04:03.424 fib: explicitly disabled via build config 00:04:03.424 port: explicitly disabled via build config 00:04:03.424 pdump: explicitly disabled via build config 00:04:03.424 table: explicitly disabled via build config 00:04:03.424 pipeline: explicitly disabled via build config 00:04:03.424 graph: explicitly disabled via build config 00:04:03.424 node: explicitly disabled via build config 00:04:03.424 00:04:03.424 drivers: 00:04:03.424 common/cpt: not in enabled drivers build config 00:04:03.424 common/dpaax: not in enabled drivers build config 00:04:03.424 common/iavf: not in enabled drivers build config 00:04:03.424 common/idpf: not in enabled drivers build config 00:04:03.424 common/ionic: not in enabled drivers build config 00:04:03.424 common/mvep: not in enabled drivers build config 00:04:03.424 common/octeontx: not in enabled drivers build config 00:04:03.424 bus/auxiliary: not in enabled drivers build config 00:04:03.424 bus/cdx: not in enabled drivers build config 00:04:03.424 bus/dpaa: not in enabled drivers build config 00:04:03.424 bus/fslmc: not in enabled drivers build config 00:04:03.424 bus/ifpga: not in enabled drivers build config 00:04:03.424 bus/platform: not in enabled drivers build config 00:04:03.424 bus/uacce: not in enabled drivers build config 00:04:03.424 bus/vmbus: not in enabled drivers build config 00:04:03.424 common/cnxk: not in enabled drivers build config 00:04:03.424 common/mlx5: not in enabled drivers build config 00:04:03.424 common/nfp: not in enabled drivers build config 00:04:03.424 common/nitrox: not in enabled drivers build config 
00:04:03.424 common/qat: not in enabled drivers build config 00:04:03.424 common/sfc_efx: not in enabled drivers build config 00:04:03.424 mempool/bucket: not in enabled drivers build config 00:04:03.424 mempool/cnxk: not in enabled drivers build config 00:04:03.424 mempool/dpaa: not in enabled drivers build config 00:04:03.424 mempool/dpaa2: not in enabled drivers build config 00:04:03.424 mempool/octeontx: not in enabled drivers build config 00:04:03.424 mempool/stack: not in enabled drivers build config 00:04:03.424 dma/cnxk: not in enabled drivers build config 00:04:03.424 dma/dpaa: not in enabled drivers build config 00:04:03.424 dma/dpaa2: not in enabled drivers build config 00:04:03.424 dma/hisilicon: not in enabled drivers build config 00:04:03.424 dma/idxd: not in enabled drivers build config 00:04:03.424 dma/ioat: not in enabled drivers build config 00:04:03.424 dma/skeleton: not in enabled drivers build config 00:04:03.424 net/af_packet: not in enabled drivers build config 00:04:03.424 net/af_xdp: not in enabled drivers build config 00:04:03.424 net/ark: not in enabled drivers build config 00:04:03.424 net/atlantic: not in enabled drivers build config 00:04:03.424 net/avp: not in enabled drivers build config 00:04:03.424 net/axgbe: not in enabled drivers build config 00:04:03.424 net/bnx2x: not in enabled drivers build config 00:04:03.424 net/bnxt: not in enabled drivers build config 00:04:03.424 net/bonding: not in enabled drivers build config 00:04:03.424 net/cnxk: not in enabled drivers build config 00:04:03.424 net/cpfl: not in enabled drivers build config 00:04:03.424 net/cxgbe: not in enabled drivers build config 00:04:03.424 net/dpaa: not in enabled drivers build config 00:04:03.424 net/dpaa2: not in enabled drivers build config 00:04:03.424 net/e1000: not in enabled drivers build config 00:04:03.424 net/ena: not in enabled drivers build config 00:04:03.424 net/enetc: not in enabled drivers build config 00:04:03.424 net/enetfec: not in enabled drivers build config 00:04:03.424 net/enic: not in enabled drivers build config 00:04:03.424 net/failsafe: not in enabled drivers build config 00:04:03.424 net/fm10k: not in enabled drivers build config 00:04:03.424 net/gve: not in enabled drivers build config 00:04:03.424 net/hinic: not in enabled drivers build config 00:04:03.424 net/hns3: not in enabled drivers build config 00:04:03.424 net/i40e: not in enabled drivers build config 00:04:03.424 net/iavf: not in enabled drivers build config 00:04:03.424 net/ice: not in enabled drivers build config 00:04:03.424 net/idpf: not in enabled drivers build config 00:04:03.424 net/igc: not in enabled drivers build config 00:04:03.424 net/ionic: not in enabled drivers build config 00:04:03.424 net/ipn3ke: not in enabled drivers build config 00:04:03.424 net/ixgbe: not in enabled drivers build config 00:04:03.424 net/mana: not in enabled drivers build config 00:04:03.424 net/memif: not in enabled drivers build config 00:04:03.424 net/mlx4: not in enabled drivers build config 00:04:03.425 net/mlx5: not in enabled drivers build config 00:04:03.425 net/mvneta: not in enabled drivers build config 00:04:03.425 net/mvpp2: not in enabled drivers build config 00:04:03.425 net/netvsc: not in enabled drivers build config 00:04:03.425 net/nfb: not in enabled drivers build config 00:04:03.425 net/nfp: not in enabled drivers build config 00:04:03.425 net/ngbe: not in enabled drivers build config 00:04:03.425 net/null: not in enabled drivers build config 00:04:03.425 net/octeontx: not in enabled drivers 
build config 00:04:03.425 net/octeon_ep: not in enabled drivers build config 00:04:03.425 net/pcap: not in enabled drivers build config 00:04:03.425 net/pfe: not in enabled drivers build config 00:04:03.425 net/qede: not in enabled drivers build config 00:04:03.425 net/ring: not in enabled drivers build config 00:04:03.425 net/sfc: not in enabled drivers build config 00:04:03.425 net/softnic: not in enabled drivers build config 00:04:03.425 net/tap: not in enabled drivers build config 00:04:03.425 net/thunderx: not in enabled drivers build config 00:04:03.425 net/txgbe: not in enabled drivers build config 00:04:03.425 net/vdev_netvsc: not in enabled drivers build config 00:04:03.425 net/vhost: not in enabled drivers build config 00:04:03.425 net/virtio: not in enabled drivers build config 00:04:03.425 net/vmxnet3: not in enabled drivers build config 00:04:03.425 raw/*: missing internal dependency, "rawdev" 00:04:03.425 crypto/armv8: not in enabled drivers build config 00:04:03.425 crypto/bcmfs: not in enabled drivers build config 00:04:03.425 crypto/caam_jr: not in enabled drivers build config 00:04:03.425 crypto/ccp: not in enabled drivers build config 00:04:03.425 crypto/cnxk: not in enabled drivers build config 00:04:03.425 crypto/dpaa_sec: not in enabled drivers build config 00:04:03.425 crypto/dpaa2_sec: not in enabled drivers build config 00:04:03.425 crypto/ipsec_mb: not in enabled drivers build config 00:04:03.425 crypto/mlx5: not in enabled drivers build config 00:04:03.425 crypto/mvsam: not in enabled drivers build config 00:04:03.425 crypto/nitrox: not in enabled drivers build config 00:04:03.425 crypto/null: not in enabled drivers build config 00:04:03.425 crypto/octeontx: not in enabled drivers build config 00:04:03.425 crypto/openssl: not in enabled drivers build config 00:04:03.425 crypto/scheduler: not in enabled drivers build config 00:04:03.425 crypto/uadk: not in enabled drivers build config 00:04:03.425 crypto/virtio: not in enabled drivers build config 00:04:03.425 compress/isal: not in enabled drivers build config 00:04:03.425 compress/mlx5: not in enabled drivers build config 00:04:03.425 compress/nitrox: not in enabled drivers build config 00:04:03.425 compress/octeontx: not in enabled drivers build config 00:04:03.425 compress/zlib: not in enabled drivers build config 00:04:03.425 regex/*: missing internal dependency, "regexdev" 00:04:03.425 ml/*: missing internal dependency, "mldev" 00:04:03.425 vdpa/ifc: not in enabled drivers build config 00:04:03.425 vdpa/mlx5: not in enabled drivers build config 00:04:03.425 vdpa/nfp: not in enabled drivers build config 00:04:03.425 vdpa/sfc: not in enabled drivers build config 00:04:03.425 event/*: missing internal dependency, "eventdev" 00:04:03.425 baseband/*: missing internal dependency, "bbdev" 00:04:03.425 gpu/*: missing internal dependency, "gpudev" 00:04:03.425 00:04:03.425 00:04:03.425 Build targets in project: 85 00:04:03.425 00:04:03.425 DPDK 24.03.0 00:04:03.425 00:04:03.425 User defined options 00:04:03.425 buildtype : debug 00:04:03.425 default_library : shared 00:04:03.425 libdir : lib 00:04:03.425 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:04:03.425 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:04:03.425 c_link_args : 00:04:03.425 cpu_instruction_set: native 00:04:03.425 disable_apps : 
dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:04:03.425 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:04:03.425 enable_docs : false 00:04:03.425 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:04:03.425 enable_kmods : false 00:04:03.425 max_lcores : 128 00:04:03.425 tests : false 00:04:03.425 00:04:03.425 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:04:03.684 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:04:03.684 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:04:03.943 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:04:03.943 [3/268] Linking static target lib/librte_kvargs.a 00:04:03.943 [4/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:04:03.943 [5/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:04:03.943 [6/268] Linking static target lib/librte_log.a 00:04:04.515 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:04:04.515 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:04:04.515 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:04:04.515 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:04:04.773 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:04:04.773 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:04:04.773 [13/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:04:04.773 [14/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:04:04.773 [15/268] Linking static target lib/librte_telemetry.a 00:04:04.773 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:04:04.773 [17/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:04:04.773 [18/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:04:04.773 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:04:05.032 [20/268] Linking target lib/librte_log.so.24.1 00:04:05.291 [21/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:04:05.291 [22/268] Linking target lib/librte_kvargs.so.24.1 00:04:05.549 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:04:05.549 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:04:05.549 [25/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:04:05.549 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:04:05.549 [27/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:04:05.549 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:04:05.549 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:04:05.549 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:04:05.549 
[31/268] Linking target lib/librte_telemetry.so.24.1 00:04:05.809 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:04:05.809 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:04:05.809 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:04:05.809 [35/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:04:06.068 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:04:06.068 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:04:06.327 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:04:06.327 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:04:06.585 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:04:06.585 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:04:06.585 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:04:06.585 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:04:06.585 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:04:06.585 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:04:06.585 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:04:06.844 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:04:06.844 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:04:06.844 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:04:07.102 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:04:07.102 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:04:07.361 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:04:07.361 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:04:07.620 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:04:07.620 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:04:07.620 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:04:07.620 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:04:07.620 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:04:07.620 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:04:07.879 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:04:07.879 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:04:07.879 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:04:08.448 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:04:08.448 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:04:08.448 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:04:08.448 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:04:08.448 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:04:08.707 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:04:08.707 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:04:08.967 [70/268] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:04:08.967 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:04:08.967 [72/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:04:08.967 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:04:08.967 [74/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:04:08.967 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:04:08.967 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:04:09.226 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:04:09.226 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:04:09.485 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:04:09.485 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:04:09.485 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:04:09.485 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:04:09.745 [83/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:04:09.745 [84/268] Linking static target lib/librte_rcu.a 00:04:09.745 [85/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:04:09.745 [86/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:04:09.745 [87/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:04:09.745 [88/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:04:10.007 [89/268] Linking static target lib/librte_ring.a 00:04:10.007 [90/268] Linking static target lib/librte_eal.a 00:04:10.007 [91/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:04:10.275 [92/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:04:10.275 [93/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:04:10.275 [94/268] Linking static target lib/librte_mempool.a 00:04:10.275 [95/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:04:10.275 [96/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:04:10.534 [97/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:04:10.534 [98/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:04:10.534 [99/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:04:10.534 [100/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:04:10.534 [101/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:04:10.793 [102/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:04:10.793 [103/268] Linking static target lib/librte_mbuf.a 00:04:11.052 [104/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:04:11.052 [105/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:04:11.052 [106/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:04:11.052 [107/268] Linking static target lib/librte_meter.a 00:04:11.052 [108/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:04:11.312 [109/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:04:11.312 [110/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:04:11.312 [111/268] Linking static target lib/librte_net.a 00:04:11.312 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:04:11.571 [113/268] Generating 
lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:04:11.571 [114/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:04:11.832 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:04:11.832 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:04:11.832 [117/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:04:11.832 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:04:11.832 [119/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:04:12.399 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:04:12.657 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:04:12.657 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:04:12.916 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:04:12.916 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:04:12.916 [125/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:04:12.916 [126/268] Linking static target lib/librte_pci.a 00:04:12.916 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:04:13.175 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:04:13.175 [129/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:04:13.175 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:04:13.175 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:04:13.175 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:04:13.175 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:04:13.175 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:04:13.175 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:04:13.175 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:04:13.434 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:04:13.434 [138/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:04:13.434 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:04:13.434 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:04:13.434 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:04:13.434 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:04:13.434 [143/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:04:13.434 [144/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:04:13.692 [145/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:04:13.692 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:04:13.692 [147/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:04:13.950 [148/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:04:13.950 [149/268] Linking static target lib/librte_cmdline.a 00:04:13.950 [150/268] Linking static target lib/librte_ethdev.a 00:04:14.208 [151/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:04:14.208 [152/268] Compiling C object 
lib/librte_timer.a.p/timer_rte_timer.c.o 00:04:14.208 [153/268] Linking static target lib/librte_timer.a 00:04:14.208 [154/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:04:14.208 [155/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:04:14.466 [156/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:04:14.466 [157/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:04:14.725 [158/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:04:14.725 [159/268] Linking static target lib/librte_hash.a 00:04:14.725 [160/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:04:14.725 [161/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:04:14.725 [162/268] Linking static target lib/librte_compressdev.a 00:04:14.725 [163/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:04:14.983 [164/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:04:15.241 [165/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:04:15.241 [166/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:04:15.241 [167/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:04:15.499 [168/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:04:15.499 [169/268] Linking static target lib/librte_dmadev.a 00:04:15.499 [170/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:04:15.499 [171/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:04:15.758 [172/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:04:15.758 [173/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:04:15.758 [174/268] Linking static target lib/librte_cryptodev.a 00:04:15.758 [175/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:04:15.758 [176/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:16.017 [177/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:04:16.017 [178/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:04:16.275 [179/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:04:16.275 [180/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:04:16.275 [181/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:16.533 [182/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:04:16.533 [183/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:04:16.533 [184/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:04:16.533 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:04:16.792 [186/268] Linking static target lib/librte_power.a 00:04:16.792 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:04:16.792 [188/268] Linking static target lib/librte_reorder.a 00:04:17.051 [189/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:04:17.051 [190/268] Linking static target lib/librte_security.a 00:04:17.051 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:04:17.310 [192/268] Compiling C object 
lib/librte_vhost.a.p/vhost_vdpa.c.o 00:04:17.310 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:04:17.310 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:04:17.569 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:04:17.828 [196/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:04:17.828 [197/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:04:18.089 [198/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:04:18.089 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:04:18.089 [200/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:18.348 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:04:18.348 [202/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:04:18.607 [203/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:04:18.607 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:04:18.865 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:04:18.866 [206/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:04:18.866 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:04:19.124 [208/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:04:19.124 [209/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:04:19.124 [210/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:04:19.124 [211/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:04:19.124 [212/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:04:19.382 [213/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:04:19.382 [214/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:04:19.382 [215/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:04:19.382 [216/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:04:19.382 [217/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:04:19.382 [218/268] Linking static target drivers/librte_bus_pci.a 00:04:19.382 [219/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:04:19.382 [220/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:04:19.382 [221/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:04:19.382 [222/268] Linking static target drivers/librte_bus_vdev.a 00:04:19.641 [223/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:04:19.641 [224/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:04:19.641 [225/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:04:19.641 [226/268] Linking static target drivers/librte_mempool_ring.a 00:04:19.641 [227/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:19.899 [228/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:04:20.467 [229/268] Compiling C object 
lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:04:20.467 [230/268] Linking static target lib/librte_vhost.a 00:04:21.034 [231/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:04:21.034 [232/268] Linking target lib/librte_eal.so.24.1 00:04:21.293 [233/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:04:21.293 [234/268] Linking target lib/librte_dmadev.so.24.1 00:04:21.293 [235/268] Linking target lib/librte_ring.so.24.1 00:04:21.293 [236/268] Linking target lib/librte_timer.so.24.1 00:04:21.293 [237/268] Linking target lib/librte_meter.so.24.1 00:04:21.293 [238/268] Linking target lib/librte_pci.so.24.1 00:04:21.293 [239/268] Linking target drivers/librte_bus_vdev.so.24.1 00:04:21.293 [240/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:04:21.293 [241/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:04:21.293 [242/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:04:21.293 [243/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:04:21.293 [244/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:04:21.293 [245/268] Linking target lib/librte_rcu.so.24.1 00:04:21.293 [246/268] Linking target lib/librte_mempool.so.24.1 00:04:21.552 [247/268] Linking target drivers/librte_bus_pci.so.24.1 00:04:21.552 [248/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:04:21.552 [249/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:04:21.552 [250/268] Linking target drivers/librte_mempool_ring.so.24.1 00:04:21.552 [251/268] Linking target lib/librte_mbuf.so.24.1 00:04:21.844 [252/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:04:21.844 [253/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:21.844 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:04:21.844 [255/268] Linking target lib/librte_compressdev.so.24.1 00:04:21.844 [256/268] Linking target lib/librte_net.so.24.1 00:04:21.844 [257/268] Linking target lib/librte_reorder.so.24.1 00:04:21.844 [258/268] Linking target lib/librte_cryptodev.so.24.1 00:04:21.844 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:04:21.844 [260/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:04:21.844 [261/268] Linking target lib/librte_hash.so.24.1 00:04:21.844 [262/268] Linking target lib/librte_cmdline.so.24.1 00:04:21.844 [263/268] Linking target lib/librte_security.so.24.1 00:04:21.844 [264/268] Linking target lib/librte_ethdev.so.24.1 00:04:22.103 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:04:22.103 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:04:22.103 [267/268] Linking target lib/librte_power.so.24.1 00:04:22.103 [268/268] Linking target lib/librte_vhost.so.24.1 00:04:22.103 INFO: autodetecting backend as ninja 00:04:22.103 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:04:48.659 CC lib/log/log_flags.o 00:04:48.659 CC lib/log/log.o 00:04:48.659 CC lib/log/log_deprecated.o 00:04:48.659 CC lib/ut_mock/mock.o 00:04:48.659 CC lib/ut/ut.o 00:04:48.659 LIB 
libspdk_log.a 00:04:48.659 LIB libspdk_ut_mock.a 00:04:48.659 LIB libspdk_ut.a 00:04:48.659 SO libspdk_ut_mock.so.6.0 00:04:48.659 SO libspdk_ut.so.2.0 00:04:48.659 SO libspdk_log.so.7.1 00:04:48.659 SYMLINK libspdk_ut_mock.so 00:04:48.659 SYMLINK libspdk_ut.so 00:04:48.659 SYMLINK libspdk_log.so 00:04:48.659 CC lib/dma/dma.o 00:04:48.659 CC lib/util/base64.o 00:04:48.659 CC lib/util/bit_array.o 00:04:48.659 CC lib/util/cpuset.o 00:04:48.659 CXX lib/trace_parser/trace.o 00:04:48.659 CC lib/util/crc16.o 00:04:48.659 CC lib/util/crc32.o 00:04:48.659 CC lib/util/crc32c.o 00:04:48.659 CC lib/ioat/ioat.o 00:04:48.659 CC lib/vfio_user/host/vfio_user_pci.o 00:04:48.659 CC lib/util/crc32_ieee.o 00:04:48.659 CC lib/util/crc64.o 00:04:48.659 CC lib/util/dif.o 00:04:48.659 CC lib/vfio_user/host/vfio_user.o 00:04:48.659 CC lib/util/fd.o 00:04:48.659 LIB libspdk_dma.a 00:04:48.659 CC lib/util/fd_group.o 00:04:48.659 SO libspdk_dma.so.5.0 00:04:48.659 SYMLINK libspdk_dma.so 00:04:48.659 CC lib/util/file.o 00:04:48.659 CC lib/util/hexlify.o 00:04:48.659 CC lib/util/iov.o 00:04:48.659 CC lib/util/math.o 00:04:48.659 LIB libspdk_ioat.a 00:04:48.659 SO libspdk_ioat.so.7.0 00:04:48.659 CC lib/util/net.o 00:04:48.659 SYMLINK libspdk_ioat.so 00:04:48.659 LIB libspdk_vfio_user.a 00:04:48.659 CC lib/util/pipe.o 00:04:48.659 SO libspdk_vfio_user.so.5.0 00:04:48.659 CC lib/util/strerror_tls.o 00:04:48.659 CC lib/util/string.o 00:04:48.659 CC lib/util/uuid.o 00:04:48.659 SYMLINK libspdk_vfio_user.so 00:04:48.659 CC lib/util/xor.o 00:04:48.659 CC lib/util/zipf.o 00:04:48.918 CC lib/util/md5.o 00:04:48.918 LIB libspdk_util.a 00:04:49.177 SO libspdk_util.so.10.1 00:04:49.177 LIB libspdk_trace_parser.a 00:04:49.435 SO libspdk_trace_parser.so.6.0 00:04:49.435 SYMLINK libspdk_util.so 00:04:49.435 SYMLINK libspdk_trace_parser.so 00:04:49.435 CC lib/rdma_utils/rdma_utils.o 00:04:49.435 CC lib/conf/conf.o 00:04:49.435 CC lib/idxd/idxd.o 00:04:49.435 CC lib/env_dpdk/env.o 00:04:49.435 CC lib/idxd/idxd_user.o 00:04:49.435 CC lib/idxd/idxd_kernel.o 00:04:49.435 CC lib/env_dpdk/pci.o 00:04:49.435 CC lib/env_dpdk/memory.o 00:04:49.435 CC lib/json/json_parse.o 00:04:49.435 CC lib/vmd/vmd.o 00:04:49.694 CC lib/vmd/led.o 00:04:49.694 LIB libspdk_conf.a 00:04:49.694 LIB libspdk_rdma_utils.a 00:04:49.953 CC lib/json/json_util.o 00:04:49.953 SO libspdk_conf.so.6.0 00:04:49.953 SO libspdk_rdma_utils.so.1.0 00:04:49.953 CC lib/json/json_write.o 00:04:49.953 SYMLINK libspdk_rdma_utils.so 00:04:49.953 SYMLINK libspdk_conf.so 00:04:49.953 CC lib/env_dpdk/init.o 00:04:49.953 CC lib/env_dpdk/threads.o 00:04:49.953 CC lib/env_dpdk/pci_ioat.o 00:04:49.953 CC lib/env_dpdk/pci_virtio.o 00:04:50.211 CC lib/env_dpdk/pci_vmd.o 00:04:50.211 CC lib/rdma_provider/common.o 00:04:50.211 LIB libspdk_idxd.a 00:04:50.211 CC lib/env_dpdk/pci_idxd.o 00:04:50.211 SO libspdk_idxd.so.12.1 00:04:50.211 LIB libspdk_vmd.a 00:04:50.211 LIB libspdk_json.a 00:04:50.211 SO libspdk_vmd.so.6.0 00:04:50.211 SO libspdk_json.so.6.0 00:04:50.211 SYMLINK libspdk_idxd.so 00:04:50.211 CC lib/env_dpdk/pci_event.o 00:04:50.211 CC lib/env_dpdk/sigbus_handler.o 00:04:50.211 CC lib/env_dpdk/pci_dpdk.o 00:04:50.211 SYMLINK libspdk_vmd.so 00:04:50.211 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:50.211 CC lib/rdma_provider/rdma_provider_verbs.o 00:04:50.211 SYMLINK libspdk_json.so 00:04:50.211 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:50.498 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:50.498 CC lib/jsonrpc/jsonrpc_client.o 00:04:50.498 CC lib/jsonrpc/jsonrpc_server.o 00:04:50.498 CC 
lib/jsonrpc/jsonrpc_client_tcp.o 00:04:50.498 LIB libspdk_rdma_provider.a 00:04:50.498 SO libspdk_rdma_provider.so.7.0 00:04:50.778 SYMLINK libspdk_rdma_provider.so 00:04:50.778 LIB libspdk_jsonrpc.a 00:04:50.778 SO libspdk_jsonrpc.so.6.0 00:04:51.037 SYMLINK libspdk_jsonrpc.so 00:04:51.037 LIB libspdk_env_dpdk.a 00:04:51.037 SO libspdk_env_dpdk.so.15.1 00:04:51.296 CC lib/rpc/rpc.o 00:04:51.296 SYMLINK libspdk_env_dpdk.so 00:04:51.555 LIB libspdk_rpc.a 00:04:51.555 SO libspdk_rpc.so.6.0 00:04:51.555 SYMLINK libspdk_rpc.so 00:04:51.813 CC lib/trace/trace_flags.o 00:04:51.813 CC lib/trace/trace.o 00:04:51.813 CC lib/trace/trace_rpc.o 00:04:51.813 CC lib/keyring/keyring_rpc.o 00:04:51.813 CC lib/keyring/keyring.o 00:04:51.813 CC lib/notify/notify.o 00:04:51.813 CC lib/notify/notify_rpc.o 00:04:52.073 LIB libspdk_keyring.a 00:04:52.073 LIB libspdk_notify.a 00:04:52.073 SO libspdk_keyring.so.2.0 00:04:52.073 SO libspdk_notify.so.6.0 00:04:52.073 LIB libspdk_trace.a 00:04:52.073 SYMLINK libspdk_keyring.so 00:04:52.073 SYMLINK libspdk_notify.so 00:04:52.073 SO libspdk_trace.so.11.0 00:04:52.331 SYMLINK libspdk_trace.so 00:04:52.589 CC lib/thread/thread.o 00:04:52.589 CC lib/thread/iobuf.o 00:04:52.589 CC lib/sock/sock.o 00:04:52.589 CC lib/sock/sock_rpc.o 00:04:53.154 LIB libspdk_sock.a 00:04:53.154 SO libspdk_sock.so.10.0 00:04:53.154 SYMLINK libspdk_sock.so 00:04:53.413 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:53.413 CC lib/nvme/nvme_ctrlr.o 00:04:53.413 CC lib/nvme/nvme_fabric.o 00:04:53.413 CC lib/nvme/nvme_ns_cmd.o 00:04:53.413 CC lib/nvme/nvme_pcie_common.o 00:04:53.413 CC lib/nvme/nvme_ns.o 00:04:53.413 CC lib/nvme/nvme_pcie.o 00:04:53.413 CC lib/nvme/nvme_qpair.o 00:04:53.413 CC lib/nvme/nvme.o 00:04:53.982 LIB libspdk_thread.a 00:04:53.982 SO libspdk_thread.so.11.0 00:04:54.241 SYMLINK libspdk_thread.so 00:04:54.241 CC lib/nvme/nvme_quirks.o 00:04:54.241 CC lib/nvme/nvme_transport.o 00:04:54.241 CC lib/nvme/nvme_discovery.o 00:04:54.241 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:54.499 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:54.499 CC lib/nvme/nvme_tcp.o 00:04:54.499 CC lib/nvme/nvme_opal.o 00:04:54.499 CC lib/nvme/nvme_io_msg.o 00:04:54.499 CC lib/nvme/nvme_poll_group.o 00:04:54.758 CC lib/nvme/nvme_zns.o 00:04:54.758 CC lib/nvme/nvme_stubs.o 00:04:55.017 CC lib/nvme/nvme_auth.o 00:04:55.017 CC lib/nvme/nvme_cuse.o 00:04:55.017 CC lib/nvme/nvme_rdma.o 00:04:55.276 CC lib/accel/accel.o 00:04:55.276 CC lib/accel/accel_rpc.o 00:04:55.276 CC lib/blob/blobstore.o 00:04:55.556 CC lib/blob/request.o 00:04:55.556 CC lib/accel/accel_sw.o 00:04:55.556 CC lib/init/json_config.o 00:04:55.815 CC lib/init/subsystem.o 00:04:55.815 CC lib/init/subsystem_rpc.o 00:04:55.815 CC lib/init/rpc.o 00:04:55.815 CC lib/blob/zeroes.o 00:04:56.073 CC lib/blob/blob_bs_dev.o 00:04:56.073 CC lib/virtio/virtio.o 00:04:56.073 CC lib/virtio/virtio_vhost_user.o 00:04:56.073 CC lib/virtio/virtio_vfio_user.o 00:04:56.073 CC lib/fsdev/fsdev.o 00:04:56.073 CC lib/fsdev/fsdev_io.o 00:04:56.073 LIB libspdk_init.a 00:04:56.073 SO libspdk_init.so.6.0 00:04:56.333 SYMLINK libspdk_init.so 00:04:56.333 CC lib/fsdev/fsdev_rpc.o 00:04:56.333 CC lib/virtio/virtio_pci.o 00:04:56.333 LIB libspdk_accel.a 00:04:56.333 LIB libspdk_nvme.a 00:04:56.333 SO libspdk_accel.so.16.0 00:04:56.591 CC lib/event/app.o 00:04:56.591 CC lib/event/reactor.o 00:04:56.591 CC lib/event/log_rpc.o 00:04:56.591 CC lib/event/app_rpc.o 00:04:56.591 CC lib/event/scheduler_static.o 00:04:56.591 SYMLINK libspdk_accel.so 00:04:56.591 LIB libspdk_virtio.a 00:04:56.591 SO 
libspdk_virtio.so.7.0 00:04:56.591 SO libspdk_nvme.so.15.0 00:04:56.591 SYMLINK libspdk_virtio.so 00:04:56.849 CC lib/bdev/bdev_rpc.o 00:04:56.849 CC lib/bdev/bdev.o 00:04:56.849 CC lib/bdev/bdev_zone.o 00:04:56.849 CC lib/bdev/part.o 00:04:56.849 CC lib/bdev/scsi_nvme.o 00:04:56.849 LIB libspdk_fsdev.a 00:04:56.849 SYMLINK libspdk_nvme.so 00:04:56.849 SO libspdk_fsdev.so.2.0 00:04:56.849 LIB libspdk_event.a 00:04:56.849 SYMLINK libspdk_fsdev.so 00:04:57.107 SO libspdk_event.so.14.0 00:04:57.107 SYMLINK libspdk_event.so 00:04:57.107 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:04:58.042 LIB libspdk_fuse_dispatcher.a 00:04:58.042 SO libspdk_fuse_dispatcher.so.1.0 00:04:58.042 SYMLINK libspdk_fuse_dispatcher.so 00:04:58.978 LIB libspdk_blob.a 00:04:58.978 SO libspdk_blob.so.11.0 00:04:58.978 SYMLINK libspdk_blob.so 00:04:59.236 CC lib/blobfs/tree.o 00:04:59.236 CC lib/blobfs/blobfs.o 00:04:59.236 CC lib/lvol/lvol.o 00:04:59.804 LIB libspdk_bdev.a 00:04:59.804 SO libspdk_bdev.so.17.0 00:04:59.804 SYMLINK libspdk_bdev.so 00:05:00.063 CC lib/scsi/dev.o 00:05:00.063 CC lib/scsi/lun.o 00:05:00.063 CC lib/scsi/port.o 00:05:00.063 LIB libspdk_blobfs.a 00:05:00.063 CC lib/ublk/ublk.o 00:05:00.063 CC lib/nbd/nbd.o 00:05:00.063 CC lib/scsi/scsi.o 00:05:00.063 CC lib/ftl/ftl_core.o 00:05:00.063 CC lib/nvmf/ctrlr.o 00:05:00.063 SO libspdk_blobfs.so.10.0 00:05:00.321 SYMLINK libspdk_blobfs.so 00:05:00.321 CC lib/scsi/scsi_bdev.o 00:05:00.321 LIB libspdk_lvol.a 00:05:00.321 SO libspdk_lvol.so.10.0 00:05:00.321 CC lib/scsi/scsi_pr.o 00:05:00.321 CC lib/nvmf/ctrlr_discovery.o 00:05:00.321 SYMLINK libspdk_lvol.so 00:05:00.321 CC lib/nvmf/ctrlr_bdev.o 00:05:00.321 CC lib/nbd/nbd_rpc.o 00:05:00.579 CC lib/nvmf/subsystem.o 00:05:00.579 CC lib/nvmf/nvmf.o 00:05:00.579 LIB libspdk_nbd.a 00:05:00.579 SO libspdk_nbd.so.7.0 00:05:00.838 SYMLINK libspdk_nbd.so 00:05:00.838 CC lib/nvmf/nvmf_rpc.o 00:05:00.838 CC lib/nvmf/transport.o 00:05:00.838 CC lib/scsi/scsi_rpc.o 00:05:00.838 CC lib/ftl/ftl_init.o 00:05:00.838 CC lib/ublk/ublk_rpc.o 00:05:00.838 CC lib/scsi/task.o 00:05:01.096 CC lib/nvmf/tcp.o 00:05:01.096 LIB libspdk_ublk.a 00:05:01.096 CC lib/nvmf/stubs.o 00:05:01.096 SO libspdk_ublk.so.3.0 00:05:01.096 LIB libspdk_scsi.a 00:05:01.096 CC lib/ftl/ftl_layout.o 00:05:01.096 SYMLINK libspdk_ublk.so 00:05:01.096 CC lib/nvmf/mdns_server.o 00:05:01.096 SO libspdk_scsi.so.9.0 00:05:01.354 SYMLINK libspdk_scsi.so 00:05:01.354 CC lib/nvmf/rdma.o 00:05:01.354 CC lib/nvmf/auth.o 00:05:01.612 CC lib/ftl/ftl_debug.o 00:05:01.612 CC lib/ftl/ftl_io.o 00:05:01.612 CC lib/ftl/ftl_sb.o 00:05:01.869 CC lib/iscsi/conn.o 00:05:01.869 CC lib/iscsi/init_grp.o 00:05:01.869 CC lib/iscsi/iscsi.o 00:05:01.869 CC lib/ftl/ftl_l2p.o 00:05:01.869 CC lib/vhost/vhost.o 00:05:01.869 CC lib/vhost/vhost_rpc.o 00:05:02.126 CC lib/vhost/vhost_scsi.o 00:05:02.126 CC lib/ftl/ftl_l2p_flat.o 00:05:02.126 CC lib/iscsi/param.o 00:05:02.384 CC lib/vhost/vhost_blk.o 00:05:02.384 CC lib/ftl/ftl_nv_cache.o 00:05:02.641 CC lib/vhost/rte_vhost_user.o 00:05:02.641 CC lib/ftl/ftl_band.o 00:05:02.906 CC lib/ftl/ftl_band_ops.o 00:05:02.906 CC lib/ftl/ftl_writer.o 00:05:02.906 CC lib/ftl/ftl_rq.o 00:05:02.906 CC lib/ftl/ftl_reloc.o 00:05:03.164 CC lib/ftl/ftl_l2p_cache.o 00:05:03.164 CC lib/ftl/ftl_p2l.o 00:05:03.164 CC lib/iscsi/portal_grp.o 00:05:03.164 CC lib/iscsi/tgt_node.o 00:05:03.164 CC lib/iscsi/iscsi_subsystem.o 00:05:03.164 CC lib/ftl/ftl_p2l_log.o 00:05:03.421 CC lib/ftl/mngt/ftl_mngt.o 00:05:03.421 LIB libspdk_nvmf.a 00:05:03.421 CC 
lib/iscsi/iscsi_rpc.o 00:05:03.421 CC lib/iscsi/task.o 00:05:03.421 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:05:03.421 SO libspdk_nvmf.so.20.0 00:05:03.687 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:05:03.687 CC lib/ftl/mngt/ftl_mngt_startup.o 00:05:03.687 LIB libspdk_vhost.a 00:05:03.687 CC lib/ftl/mngt/ftl_mngt_md.o 00:05:03.687 CC lib/ftl/mngt/ftl_mngt_misc.o 00:05:03.687 SYMLINK libspdk_nvmf.so 00:05:03.687 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:05:03.687 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:05:03.687 SO libspdk_vhost.so.8.0 00:05:03.945 CC lib/ftl/mngt/ftl_mngt_band.o 00:05:03.945 SYMLINK libspdk_vhost.so 00:05:03.945 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:05:03.945 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:05:03.945 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:05:03.945 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:05:03.945 LIB libspdk_iscsi.a 00:05:03.945 CC lib/ftl/utils/ftl_conf.o 00:05:03.945 CC lib/ftl/utils/ftl_md.o 00:05:03.945 SO libspdk_iscsi.so.8.0 00:05:03.945 CC lib/ftl/utils/ftl_mempool.o 00:05:04.203 CC lib/ftl/utils/ftl_bitmap.o 00:05:04.203 CC lib/ftl/utils/ftl_property.o 00:05:04.203 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:05:04.203 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:05:04.203 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:05:04.203 SYMLINK libspdk_iscsi.so 00:05:04.203 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:05:04.203 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:05:04.203 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:05:04.203 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:05:04.461 CC lib/ftl/upgrade/ftl_sb_v3.o 00:05:04.461 CC lib/ftl/upgrade/ftl_sb_v5.o 00:05:04.461 CC lib/ftl/nvc/ftl_nvc_dev.o 00:05:04.461 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:05:04.461 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:05:04.461 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:05:04.461 CC lib/ftl/base/ftl_base_dev.o 00:05:04.719 CC lib/ftl/base/ftl_base_bdev.o 00:05:04.719 CC lib/ftl/ftl_trace.o 00:05:04.977 LIB libspdk_ftl.a 00:05:05.235 SO libspdk_ftl.so.9.0 00:05:05.492 SYMLINK libspdk_ftl.so 00:05:05.749 CC module/env_dpdk/env_dpdk_rpc.o 00:05:06.009 CC module/accel/error/accel_error.o 00:05:06.009 CC module/scheduler/dynamic/scheduler_dynamic.o 00:05:06.009 CC module/sock/posix/posix.o 00:05:06.009 CC module/keyring/linux/keyring.o 00:05:06.009 CC module/blob/bdev/blob_bdev.o 00:05:06.009 CC module/scheduler/gscheduler/gscheduler.o 00:05:06.009 CC module/fsdev/aio/fsdev_aio.o 00:05:06.009 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:05:06.009 CC module/keyring/file/keyring.o 00:05:06.009 LIB libspdk_env_dpdk_rpc.a 00:05:06.009 SO libspdk_env_dpdk_rpc.so.6.0 00:05:06.267 LIB libspdk_scheduler_dynamic.a 00:05:06.267 CC module/keyring/linux/keyring_rpc.o 00:05:06.267 SYMLINK libspdk_env_dpdk_rpc.so 00:05:06.267 CC module/keyring/file/keyring_rpc.o 00:05:06.267 SO libspdk_scheduler_dynamic.so.4.0 00:05:06.267 LIB libspdk_scheduler_gscheduler.a 00:05:06.267 CC module/fsdev/aio/fsdev_aio_rpc.o 00:05:06.267 LIB libspdk_scheduler_dpdk_governor.a 00:05:06.267 SO libspdk_scheduler_gscheduler.so.4.0 00:05:06.267 SYMLINK libspdk_scheduler_dynamic.so 00:05:06.267 CC module/fsdev/aio/linux_aio_mgr.o 00:05:06.267 SO libspdk_scheduler_dpdk_governor.so.4.0 00:05:06.267 CC module/accel/error/accel_error_rpc.o 00:05:06.267 SYMLINK libspdk_scheduler_gscheduler.so 00:05:06.525 SYMLINK libspdk_scheduler_dpdk_governor.so 00:05:06.525 LIB libspdk_keyring_linux.a 00:05:06.525 LIB libspdk_blob_bdev.a 00:05:06.525 SO libspdk_keyring_linux.so.1.0 00:05:06.525 LIB libspdk_keyring_file.a 00:05:06.525 SO libspdk_blob_bdev.so.11.0 00:05:06.525 SO 
libspdk_keyring_file.so.2.0 00:05:06.525 SYMLINK libspdk_keyring_linux.so 00:05:06.525 SYMLINK libspdk_blob_bdev.so 00:05:06.525 SYMLINK libspdk_keyring_file.so 00:05:06.525 LIB libspdk_accel_error.a 00:05:06.525 CC module/accel/ioat/accel_ioat.o 00:05:06.525 CC module/accel/ioat/accel_ioat_rpc.o 00:05:06.525 CC module/accel/dsa/accel_dsa.o 00:05:06.525 CC module/sock/uring/uring.o 00:05:06.783 LIB libspdk_fsdev_aio.a 00:05:06.783 SO libspdk_accel_error.so.2.0 00:05:06.783 SO libspdk_fsdev_aio.so.1.0 00:05:06.783 SYMLINK libspdk_accel_error.so 00:05:06.783 CC module/accel/dsa/accel_dsa_rpc.o 00:05:06.783 CC module/accel/iaa/accel_iaa.o 00:05:06.783 LIB libspdk_sock_posix.a 00:05:06.783 SYMLINK libspdk_fsdev_aio.so 00:05:06.783 CC module/accel/iaa/accel_iaa_rpc.o 00:05:07.043 SO libspdk_sock_posix.so.6.0 00:05:07.043 LIB libspdk_accel_ioat.a 00:05:07.043 SO libspdk_accel_ioat.so.6.0 00:05:07.043 CC module/bdev/delay/vbdev_delay.o 00:05:07.043 LIB libspdk_accel_dsa.a 00:05:07.043 CC module/blobfs/bdev/blobfs_bdev.o 00:05:07.043 SYMLINK libspdk_sock_posix.so 00:05:07.043 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:05:07.043 SO libspdk_accel_dsa.so.5.0 00:05:07.043 SYMLINK libspdk_accel_ioat.so 00:05:07.043 CC module/bdev/delay/vbdev_delay_rpc.o 00:05:07.043 CC module/bdev/error/vbdev_error.o 00:05:07.043 SYMLINK libspdk_accel_dsa.so 00:05:07.301 LIB libspdk_accel_iaa.a 00:05:07.301 CC module/bdev/gpt/gpt.o 00:05:07.301 SO libspdk_accel_iaa.so.3.0 00:05:07.301 LIB libspdk_blobfs_bdev.a 00:05:07.301 CC module/bdev/lvol/vbdev_lvol.o 00:05:07.301 SO libspdk_blobfs_bdev.so.6.0 00:05:07.301 SYMLINK libspdk_accel_iaa.so 00:05:07.301 CC module/bdev/malloc/bdev_malloc.o 00:05:07.301 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:05:07.559 SYMLINK libspdk_blobfs_bdev.so 00:05:07.559 CC module/bdev/error/vbdev_error_rpc.o 00:05:07.559 LIB libspdk_sock_uring.a 00:05:07.559 CC module/bdev/gpt/vbdev_gpt.o 00:05:07.559 SO libspdk_sock_uring.so.5.0 00:05:07.559 CC module/bdev/null/bdev_null.o 00:05:07.559 CC module/bdev/nvme/bdev_nvme.o 00:05:07.559 LIB libspdk_bdev_delay.a 00:05:07.559 CC module/bdev/nvme/bdev_nvme_rpc.o 00:05:07.559 SO libspdk_bdev_delay.so.6.0 00:05:07.559 SYMLINK libspdk_sock_uring.so 00:05:07.559 CC module/bdev/nvme/nvme_rpc.o 00:05:07.817 SYMLINK libspdk_bdev_delay.so 00:05:07.817 CC module/bdev/nvme/bdev_mdns_client.o 00:05:07.817 LIB libspdk_bdev_error.a 00:05:07.817 LIB libspdk_bdev_gpt.a 00:05:07.817 CC module/bdev/malloc/bdev_malloc_rpc.o 00:05:07.817 CC module/bdev/null/bdev_null_rpc.o 00:05:07.817 SO libspdk_bdev_gpt.so.6.0 00:05:07.817 SO libspdk_bdev_error.so.6.0 00:05:07.817 SYMLINK libspdk_bdev_gpt.so 00:05:07.817 CC module/bdev/nvme/vbdev_opal.o 00:05:07.817 CC module/bdev/nvme/vbdev_opal_rpc.o 00:05:07.817 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:05:07.817 SYMLINK libspdk_bdev_error.so 00:05:08.076 LIB libspdk_bdev_malloc.a 00:05:08.076 SO libspdk_bdev_malloc.so.6.0 00:05:08.076 LIB libspdk_bdev_null.a 00:05:08.076 SO libspdk_bdev_null.so.6.0 00:05:08.076 SYMLINK libspdk_bdev_malloc.so 00:05:08.076 CC module/bdev/passthru/vbdev_passthru.o 00:05:08.076 CC module/bdev/raid/bdev_raid.o 00:05:08.076 SYMLINK libspdk_bdev_null.so 00:05:08.076 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:05:08.334 LIB libspdk_bdev_lvol.a 00:05:08.334 SO libspdk_bdev_lvol.so.6.0 00:05:08.334 CC module/bdev/split/vbdev_split.o 00:05:08.334 CC module/bdev/zone_block/vbdev_zone_block.o 00:05:08.334 CC module/bdev/uring/bdev_uring.o 00:05:08.334 SYMLINK libspdk_bdev_lvol.so 00:05:08.334 CC 
module/bdev/uring/bdev_uring_rpc.o 00:05:08.334 CC module/bdev/aio/bdev_aio.o 00:05:08.334 CC module/bdev/aio/bdev_aio_rpc.o 00:05:08.334 LIB libspdk_bdev_passthru.a 00:05:08.592 SO libspdk_bdev_passthru.so.6.0 00:05:08.592 SYMLINK libspdk_bdev_passthru.so 00:05:08.592 CC module/bdev/split/vbdev_split_rpc.o 00:05:08.592 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:05:08.592 CC module/bdev/raid/bdev_raid_rpc.o 00:05:08.592 LIB libspdk_bdev_uring.a 00:05:08.592 CC module/bdev/raid/bdev_raid_sb.o 00:05:08.592 SO libspdk_bdev_uring.so.6.0 00:05:08.851 CC module/bdev/ftl/bdev_ftl.o 00:05:08.851 LIB libspdk_bdev_zone_block.a 00:05:08.851 LIB libspdk_bdev_split.a 00:05:08.851 LIB libspdk_bdev_aio.a 00:05:08.851 SO libspdk_bdev_zone_block.so.6.0 00:05:08.851 SO libspdk_bdev_split.so.6.0 00:05:08.851 SO libspdk_bdev_aio.so.6.0 00:05:08.851 SYMLINK libspdk_bdev_uring.so 00:05:08.851 SYMLINK libspdk_bdev_zone_block.so 00:05:08.851 CC module/bdev/raid/raid0.o 00:05:08.851 CC module/bdev/raid/raid1.o 00:05:08.851 CC module/bdev/iscsi/bdev_iscsi.o 00:05:08.851 SYMLINK libspdk_bdev_split.so 00:05:08.851 CC module/bdev/ftl/bdev_ftl_rpc.o 00:05:08.851 SYMLINK libspdk_bdev_aio.so 00:05:08.851 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:05:08.851 CC module/bdev/raid/concat.o 00:05:09.418 LIB libspdk_bdev_iscsi.a 00:05:09.418 LIB libspdk_bdev_ftl.a 00:05:09.418 LIB libspdk_bdev_raid.a 00:05:09.418 SO libspdk_bdev_iscsi.so.6.0 00:05:09.418 SO libspdk_bdev_ftl.so.6.0 00:05:09.418 SO libspdk_bdev_raid.so.6.0 00:05:09.418 CC module/bdev/virtio/bdev_virtio_blk.o 00:05:09.418 CC module/bdev/virtio/bdev_virtio_scsi.o 00:05:09.418 CC module/bdev/virtio/bdev_virtio_rpc.o 00:05:09.418 SYMLINK libspdk_bdev_iscsi.so 00:05:09.418 SYMLINK libspdk_bdev_ftl.so 00:05:09.418 SYMLINK libspdk_bdev_raid.so 00:05:09.985 LIB libspdk_bdev_virtio.a 00:05:09.985 SO libspdk_bdev_virtio.so.6.0 00:05:10.244 SYMLINK libspdk_bdev_virtio.so 00:05:10.502 LIB libspdk_bdev_nvme.a 00:05:10.502 SO libspdk_bdev_nvme.so.7.1 00:05:10.761 SYMLINK libspdk_bdev_nvme.so 00:05:11.329 CC module/event/subsystems/fsdev/fsdev.o 00:05:11.329 CC module/event/subsystems/vmd/vmd.o 00:05:11.329 CC module/event/subsystems/vmd/vmd_rpc.o 00:05:11.329 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:05:11.329 CC module/event/subsystems/iobuf/iobuf.o 00:05:11.329 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:05:11.329 CC module/event/subsystems/scheduler/scheduler.o 00:05:11.329 CC module/event/subsystems/sock/sock.o 00:05:11.329 CC module/event/subsystems/keyring/keyring.o 00:05:11.329 LIB libspdk_event_vhost_blk.a 00:05:11.329 SO libspdk_event_vhost_blk.so.3.0 00:05:11.329 LIB libspdk_event_scheduler.a 00:05:11.329 SO libspdk_event_scheduler.so.4.0 00:05:11.329 SYMLINK libspdk_event_vhost_blk.so 00:05:11.329 LIB libspdk_event_keyring.a 00:05:11.329 LIB libspdk_event_vmd.a 00:05:11.329 LIB libspdk_event_fsdev.a 00:05:11.587 LIB libspdk_event_iobuf.a 00:05:11.587 SO libspdk_event_fsdev.so.1.0 00:05:11.587 SO libspdk_event_keyring.so.1.0 00:05:11.587 LIB libspdk_event_sock.a 00:05:11.587 SYMLINK libspdk_event_scheduler.so 00:05:11.587 SO libspdk_event_vmd.so.6.0 00:05:11.587 SO libspdk_event_sock.so.5.0 00:05:11.587 SO libspdk_event_iobuf.so.3.0 00:05:11.587 SYMLINK libspdk_event_fsdev.so 00:05:11.587 SYMLINK libspdk_event_keyring.so 00:05:11.587 SYMLINK libspdk_event_vmd.so 00:05:11.587 SYMLINK libspdk_event_sock.so 00:05:11.587 SYMLINK libspdk_event_iobuf.so 00:05:11.844 CC module/event/subsystems/accel/accel.o 00:05:12.117 LIB libspdk_event_accel.a 
00:05:12.117 SO libspdk_event_accel.so.6.0 00:05:12.117 SYMLINK libspdk_event_accel.so 00:05:12.409 CC module/event/subsystems/bdev/bdev.o 00:05:12.667 LIB libspdk_event_bdev.a 00:05:12.667 SO libspdk_event_bdev.so.6.0 00:05:12.667 SYMLINK libspdk_event_bdev.so 00:05:12.925 CC module/event/subsystems/nbd/nbd.o 00:05:12.925 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:05:12.925 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:05:12.925 CC module/event/subsystems/ublk/ublk.o 00:05:12.925 CC module/event/subsystems/scsi/scsi.o 00:05:12.925 LIB libspdk_event_nbd.a 00:05:12.925 SO libspdk_event_nbd.so.6.0 00:05:13.183 LIB libspdk_event_scsi.a 00:05:13.183 LIB libspdk_event_ublk.a 00:05:13.183 SYMLINK libspdk_event_nbd.so 00:05:13.183 SO libspdk_event_ublk.so.3.0 00:05:13.183 SO libspdk_event_scsi.so.6.0 00:05:13.183 LIB libspdk_event_nvmf.a 00:05:13.183 SYMLINK libspdk_event_ublk.so 00:05:13.183 SYMLINK libspdk_event_scsi.so 00:05:13.183 SO libspdk_event_nvmf.so.6.0 00:05:13.183 SYMLINK libspdk_event_nvmf.so 00:05:13.442 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:05:13.442 CC module/event/subsystems/iscsi/iscsi.o 00:05:13.701 LIB libspdk_event_vhost_scsi.a 00:05:13.701 LIB libspdk_event_iscsi.a 00:05:13.701 SO libspdk_event_vhost_scsi.so.3.0 00:05:13.701 SO libspdk_event_iscsi.so.6.0 00:05:13.701 SYMLINK libspdk_event_vhost_scsi.so 00:05:13.701 SYMLINK libspdk_event_iscsi.so 00:05:13.959 SO libspdk.so.6.0 00:05:13.959 SYMLINK libspdk.so 00:05:13.959 CXX app/trace/trace.o 00:05:13.959 CC app/spdk_nvme_identify/identify.o 00:05:13.959 CC app/trace_record/trace_record.o 00:05:13.959 CC app/spdk_lspci/spdk_lspci.o 00:05:13.959 CC app/spdk_nvme_perf/perf.o 00:05:14.218 CC app/nvmf_tgt/nvmf_main.o 00:05:14.218 CC app/spdk_tgt/spdk_tgt.o 00:05:14.218 CC app/iscsi_tgt/iscsi_tgt.o 00:05:14.218 CC test/thread/poller_perf/poller_perf.o 00:05:14.218 CC examples/util/zipf/zipf.o 00:05:14.218 LINK spdk_lspci 00:05:14.218 LINK nvmf_tgt 00:05:14.218 LINK spdk_trace_record 00:05:14.476 LINK poller_perf 00:05:14.476 LINK spdk_tgt 00:05:14.476 LINK zipf 00:05:14.476 LINK iscsi_tgt 00:05:14.476 LINK spdk_trace 00:05:14.735 TEST_HEADER include/spdk/accel.h 00:05:14.735 TEST_HEADER include/spdk/accel_module.h 00:05:14.735 TEST_HEADER include/spdk/assert.h 00:05:14.735 TEST_HEADER include/spdk/barrier.h 00:05:14.735 TEST_HEADER include/spdk/base64.h 00:05:14.735 TEST_HEADER include/spdk/bdev.h 00:05:14.735 TEST_HEADER include/spdk/bdev_module.h 00:05:14.735 TEST_HEADER include/spdk/bdev_zone.h 00:05:14.735 TEST_HEADER include/spdk/bit_array.h 00:05:14.735 TEST_HEADER include/spdk/bit_pool.h 00:05:14.735 TEST_HEADER include/spdk/blob_bdev.h 00:05:14.735 TEST_HEADER include/spdk/blobfs_bdev.h 00:05:14.735 TEST_HEADER include/spdk/blobfs.h 00:05:14.735 TEST_HEADER include/spdk/blob.h 00:05:14.735 TEST_HEADER include/spdk/conf.h 00:05:14.735 TEST_HEADER include/spdk/config.h 00:05:14.735 TEST_HEADER include/spdk/cpuset.h 00:05:14.735 TEST_HEADER include/spdk/crc16.h 00:05:14.735 TEST_HEADER include/spdk/crc32.h 00:05:14.735 TEST_HEADER include/spdk/crc64.h 00:05:14.735 TEST_HEADER include/spdk/dif.h 00:05:14.735 TEST_HEADER include/spdk/dma.h 00:05:14.735 TEST_HEADER include/spdk/endian.h 00:05:14.735 TEST_HEADER include/spdk/env_dpdk.h 00:05:14.735 TEST_HEADER include/spdk/env.h 00:05:14.735 TEST_HEADER include/spdk/event.h 00:05:14.735 TEST_HEADER include/spdk/fd_group.h 00:05:14.735 TEST_HEADER include/spdk/fd.h 00:05:14.735 TEST_HEADER include/spdk/file.h 00:05:14.735 CC test/dma/test_dma/test_dma.o 
00:05:14.735 TEST_HEADER include/spdk/fsdev.h 00:05:14.735 TEST_HEADER include/spdk/fsdev_module.h 00:05:14.735 TEST_HEADER include/spdk/ftl.h 00:05:14.735 TEST_HEADER include/spdk/fuse_dispatcher.h 00:05:14.735 TEST_HEADER include/spdk/gpt_spec.h 00:05:14.735 TEST_HEADER include/spdk/hexlify.h 00:05:14.735 TEST_HEADER include/spdk/histogram_data.h 00:05:14.735 TEST_HEADER include/spdk/idxd.h 00:05:14.735 TEST_HEADER include/spdk/idxd_spec.h 00:05:14.735 CC examples/ioat/perf/perf.o 00:05:14.735 TEST_HEADER include/spdk/init.h 00:05:14.735 CC examples/interrupt_tgt/interrupt_tgt.o 00:05:14.735 TEST_HEADER include/spdk/ioat.h 00:05:14.735 CC examples/ioat/verify/verify.o 00:05:14.735 TEST_HEADER include/spdk/ioat_spec.h 00:05:14.735 TEST_HEADER include/spdk/iscsi_spec.h 00:05:14.735 TEST_HEADER include/spdk/json.h 00:05:14.735 TEST_HEADER include/spdk/jsonrpc.h 00:05:14.735 TEST_HEADER include/spdk/keyring.h 00:05:14.735 TEST_HEADER include/spdk/keyring_module.h 00:05:14.735 TEST_HEADER include/spdk/likely.h 00:05:14.735 TEST_HEADER include/spdk/log.h 00:05:14.735 TEST_HEADER include/spdk/lvol.h 00:05:14.735 TEST_HEADER include/spdk/md5.h 00:05:14.735 TEST_HEADER include/spdk/memory.h 00:05:14.735 CC app/spdk_nvme_discover/discovery_aer.o 00:05:14.735 TEST_HEADER include/spdk/mmio.h 00:05:14.735 TEST_HEADER include/spdk/nbd.h 00:05:14.735 TEST_HEADER include/spdk/net.h 00:05:14.735 CC test/app/bdev_svc/bdev_svc.o 00:05:14.735 TEST_HEADER include/spdk/notify.h 00:05:14.735 TEST_HEADER include/spdk/nvme.h 00:05:14.735 TEST_HEADER include/spdk/nvme_intel.h 00:05:14.735 TEST_HEADER include/spdk/nvme_ocssd.h 00:05:14.735 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:05:14.735 TEST_HEADER include/spdk/nvme_spec.h 00:05:14.735 TEST_HEADER include/spdk/nvme_zns.h 00:05:14.735 TEST_HEADER include/spdk/nvmf_cmd.h 00:05:14.735 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:05:14.735 TEST_HEADER include/spdk/nvmf.h 00:05:14.735 TEST_HEADER include/spdk/nvmf_spec.h 00:05:14.735 TEST_HEADER include/spdk/nvmf_transport.h 00:05:14.993 TEST_HEADER include/spdk/opal.h 00:05:14.993 TEST_HEADER include/spdk/opal_spec.h 00:05:14.993 TEST_HEADER include/spdk/pci_ids.h 00:05:14.993 TEST_HEADER include/spdk/pipe.h 00:05:14.993 TEST_HEADER include/spdk/queue.h 00:05:14.993 TEST_HEADER include/spdk/reduce.h 00:05:14.993 TEST_HEADER include/spdk/rpc.h 00:05:14.993 TEST_HEADER include/spdk/scheduler.h 00:05:14.993 TEST_HEADER include/spdk/scsi.h 00:05:14.993 TEST_HEADER include/spdk/scsi_spec.h 00:05:14.993 TEST_HEADER include/spdk/sock.h 00:05:14.993 TEST_HEADER include/spdk/stdinc.h 00:05:14.993 TEST_HEADER include/spdk/string.h 00:05:14.993 TEST_HEADER include/spdk/thread.h 00:05:14.993 TEST_HEADER include/spdk/trace.h 00:05:14.993 TEST_HEADER include/spdk/trace_parser.h 00:05:14.993 TEST_HEADER include/spdk/tree.h 00:05:14.993 TEST_HEADER include/spdk/ublk.h 00:05:14.993 TEST_HEADER include/spdk/util.h 00:05:14.993 TEST_HEADER include/spdk/uuid.h 00:05:14.993 TEST_HEADER include/spdk/version.h 00:05:14.993 TEST_HEADER include/spdk/vfio_user_pci.h 00:05:14.993 TEST_HEADER include/spdk/vfio_user_spec.h 00:05:14.993 TEST_HEADER include/spdk/vhost.h 00:05:14.993 TEST_HEADER include/spdk/vmd.h 00:05:14.993 TEST_HEADER include/spdk/xor.h 00:05:14.993 TEST_HEADER include/spdk/zipf.h 00:05:14.993 CXX test/cpp_headers/accel.o 00:05:14.993 LINK interrupt_tgt 00:05:14.993 LINK ioat_perf 00:05:14.993 LINK spdk_nvme_identify 00:05:14.993 LINK verify 00:05:14.993 LINK bdev_svc 00:05:14.993 LINK spdk_nvme_discover 00:05:15.250 LINK 
spdk_nvme_perf 00:05:15.250 CC test/env/mem_callbacks/mem_callbacks.o 00:05:15.250 CXX test/cpp_headers/accel_module.o 00:05:15.250 CXX test/cpp_headers/assert.o 00:05:15.250 CXX test/cpp_headers/barrier.o 00:05:15.250 LINK test_dma 00:05:15.250 CC test/rpc_client/rpc_client_test.o 00:05:15.509 CC test/event/event_perf/event_perf.o 00:05:15.509 CC app/spdk_top/spdk_top.o 00:05:15.509 CXX test/cpp_headers/base64.o 00:05:15.509 CC test/env/vtophys/vtophys.o 00:05:15.509 CC examples/thread/thread/thread_ex.o 00:05:15.509 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:05:15.509 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:05:15.509 LINK rpc_client_test 00:05:15.509 LINK event_perf 00:05:15.767 CXX test/cpp_headers/bdev.o 00:05:15.767 LINK vtophys 00:05:15.767 LINK env_dpdk_post_init 00:05:15.767 LINK thread 00:05:15.767 CC examples/sock/hello_world/hello_sock.o 00:05:15.767 LINK mem_callbacks 00:05:15.767 CC test/event/reactor/reactor.o 00:05:15.767 CXX test/cpp_headers/bdev_module.o 00:05:15.767 CC examples/vmd/lsvmd/lsvmd.o 00:05:16.025 CC test/event/reactor_perf/reactor_perf.o 00:05:16.025 LINK nvme_fuzz 00:05:16.025 LINK reactor 00:05:16.025 LINK hello_sock 00:05:16.025 CC test/env/memory/memory_ut.o 00:05:16.025 LINK lsvmd 00:05:16.025 CC test/accel/dif/dif.o 00:05:16.025 LINK reactor_perf 00:05:16.025 CC test/event/app_repeat/app_repeat.o 00:05:16.025 CXX test/cpp_headers/bdev_zone.o 00:05:16.283 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:05:16.284 LINK app_repeat 00:05:16.284 CC test/event/scheduler/scheduler.o 00:05:16.284 CXX test/cpp_headers/bit_array.o 00:05:16.284 CC examples/vmd/led/led.o 00:05:16.284 LINK spdk_top 00:05:16.542 CC test/blobfs/mkfs/mkfs.o 00:05:16.542 CC test/lvol/esnap/esnap.o 00:05:16.542 CXX test/cpp_headers/bit_pool.o 00:05:16.542 LINK led 00:05:16.542 LINK scheduler 00:05:16.542 CC examples/idxd/perf/perf.o 00:05:16.542 LINK mkfs 00:05:16.542 CC app/vhost/vhost.o 00:05:16.800 CXX test/cpp_headers/blob_bdev.o 00:05:16.800 LINK dif 00:05:16.800 LINK vhost 00:05:16.800 CXX test/cpp_headers/blobfs_bdev.o 00:05:16.800 CC examples/fsdev/hello_world/hello_fsdev.o 00:05:17.059 CC examples/accel/perf/accel_perf.o 00:05:17.059 LINK idxd_perf 00:05:17.059 CC examples/blob/hello_world/hello_blob.o 00:05:17.059 CXX test/cpp_headers/blobfs.o 00:05:17.317 CC test/nvme/aer/aer.o 00:05:17.317 CC app/spdk_dd/spdk_dd.o 00:05:17.317 LINK hello_fsdev 00:05:17.317 CC test/nvme/reset/reset.o 00:05:17.317 CXX test/cpp_headers/blob.o 00:05:17.317 LINK memory_ut 00:05:17.317 LINK hello_blob 00:05:17.575 LINK accel_perf 00:05:17.575 CXX test/cpp_headers/conf.o 00:05:17.575 LINK aer 00:05:17.575 LINK reset 00:05:17.575 CC test/nvme/sgl/sgl.o 00:05:17.575 CC test/env/pci/pci_ut.o 00:05:17.575 CXX test/cpp_headers/config.o 00:05:17.575 CXX test/cpp_headers/cpuset.o 00:05:17.575 CC examples/blob/cli/blobcli.o 00:05:17.575 CXX test/cpp_headers/crc16.o 00:05:17.833 LINK spdk_dd 00:05:17.833 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:05:17.833 LINK sgl 00:05:17.833 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:05:17.833 CXX test/cpp_headers/crc32.o 00:05:18.092 CC test/bdev/bdevio/bdevio.o 00:05:18.092 LINK iscsi_fuzz 00:05:18.092 LINK pci_ut 00:05:18.092 CC test/nvme/e2edp/nvme_dp.o 00:05:18.092 CXX test/cpp_headers/crc64.o 00:05:18.092 CC test/app/histogram_perf/histogram_perf.o 00:05:18.092 CC app/fio/nvme/fio_plugin.o 00:05:18.092 LINK blobcli 00:05:18.351 LINK histogram_perf 00:05:18.351 CXX test/cpp_headers/dif.o 00:05:18.351 CC test/nvme/overhead/overhead.o 00:05:18.351 LINK 
nvme_dp 00:05:18.351 LINK bdevio 00:05:18.351 CC test/app/jsoncat/jsoncat.o 00:05:18.351 LINK vhost_fuzz 00:05:18.609 CXX test/cpp_headers/dma.o 00:05:18.609 CC test/app/stub/stub.o 00:05:18.609 CC examples/nvme/hello_world/hello_world.o 00:05:18.609 LINK jsoncat 00:05:18.609 CC examples/nvme/reconnect/reconnect.o 00:05:18.609 LINK overhead 00:05:18.609 LINK spdk_nvme 00:05:18.609 CXX test/cpp_headers/endian.o 00:05:18.867 LINK stub 00:05:18.867 CC app/fio/bdev/fio_plugin.o 00:05:18.867 CXX test/cpp_headers/env_dpdk.o 00:05:18.867 CXX test/cpp_headers/env.o 00:05:18.867 CC test/nvme/err_injection/err_injection.o 00:05:18.867 CXX test/cpp_headers/event.o 00:05:18.867 LINK hello_world 00:05:19.126 CC examples/nvme/nvme_manage/nvme_manage.o 00:05:19.126 LINK reconnect 00:05:19.126 CXX test/cpp_headers/fd_group.o 00:05:19.126 LINK err_injection 00:05:19.126 CC examples/nvme/hotplug/hotplug.o 00:05:19.126 CC examples/nvme/arbitration/arbitration.o 00:05:19.126 CC examples/nvme/cmb_copy/cmb_copy.o 00:05:19.126 CC examples/nvme/abort/abort.o 00:05:19.126 CXX test/cpp_headers/fd.o 00:05:19.384 LINK spdk_bdev 00:05:19.384 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:05:19.384 LINK hotplug 00:05:19.384 LINK cmb_copy 00:05:19.384 CC test/nvme/startup/startup.o 00:05:19.384 CXX test/cpp_headers/file.o 00:05:19.384 CXX test/cpp_headers/fsdev.o 00:05:19.384 LINK arbitration 00:05:19.643 CXX test/cpp_headers/fsdev_module.o 00:05:19.643 CXX test/cpp_headers/ftl.o 00:05:19.643 LINK pmr_persistence 00:05:19.643 LINK startup 00:05:19.643 LINK nvme_manage 00:05:19.643 LINK abort 00:05:19.643 CXX test/cpp_headers/fuse_dispatcher.o 00:05:19.643 CXX test/cpp_headers/gpt_spec.o 00:05:19.643 CXX test/cpp_headers/hexlify.o 00:05:19.643 CXX test/cpp_headers/histogram_data.o 00:05:19.643 CXX test/cpp_headers/idxd.o 00:05:19.643 CXX test/cpp_headers/idxd_spec.o 00:05:19.902 CXX test/cpp_headers/init.o 00:05:19.902 CXX test/cpp_headers/ioat.o 00:05:19.902 CXX test/cpp_headers/ioat_spec.o 00:05:19.902 CC test/nvme/reserve/reserve.o 00:05:19.902 CC test/nvme/simple_copy/simple_copy.o 00:05:19.902 CXX test/cpp_headers/iscsi_spec.o 00:05:19.902 CXX test/cpp_headers/json.o 00:05:19.902 CXX test/cpp_headers/jsonrpc.o 00:05:20.161 CC examples/bdev/bdevperf/bdevperf.o 00:05:20.161 LINK reserve 00:05:20.161 CC examples/bdev/hello_world/hello_bdev.o 00:05:20.161 CXX test/cpp_headers/keyring.o 00:05:20.161 CXX test/cpp_headers/keyring_module.o 00:05:20.161 CC test/nvme/connect_stress/connect_stress.o 00:05:20.161 LINK simple_copy 00:05:20.161 CXX test/cpp_headers/likely.o 00:05:20.419 CC test/nvme/boot_partition/boot_partition.o 00:05:20.419 CC test/nvme/compliance/nvme_compliance.o 00:05:20.419 CXX test/cpp_headers/log.o 00:05:20.419 LINK connect_stress 00:05:20.419 LINK hello_bdev 00:05:20.419 CC test/nvme/fused_ordering/fused_ordering.o 00:05:20.419 CC test/nvme/fdp/fdp.o 00:05:20.419 CC test/nvme/doorbell_aers/doorbell_aers.o 00:05:20.677 LINK boot_partition 00:05:20.677 CXX test/cpp_headers/lvol.o 00:05:20.677 CC test/nvme/cuse/cuse.o 00:05:20.677 LINK fused_ordering 00:05:20.677 LINK doorbell_aers 00:05:20.678 CXX test/cpp_headers/md5.o 00:05:20.678 CXX test/cpp_headers/memory.o 00:05:20.937 LINK fdp 00:05:20.937 CXX test/cpp_headers/mmio.o 00:05:20.937 CXX test/cpp_headers/nbd.o 00:05:20.937 LINK nvme_compliance 00:05:20.937 CXX test/cpp_headers/net.o 00:05:20.937 CXX test/cpp_headers/notify.o 00:05:20.937 LINK bdevperf 00:05:20.937 CXX test/cpp_headers/nvme.o 00:05:21.195 CXX test/cpp_headers/nvme_intel.o 
00:05:21.195 CXX test/cpp_headers/nvme_ocssd.o 00:05:21.195 CXX test/cpp_headers/nvme_ocssd_spec.o 00:05:21.195 CXX test/cpp_headers/nvme_spec.o 00:05:21.195 CXX test/cpp_headers/nvme_zns.o 00:05:21.195 CXX test/cpp_headers/nvmf_cmd.o 00:05:21.195 CXX test/cpp_headers/nvmf_fc_spec.o 00:05:21.195 CXX test/cpp_headers/nvmf.o 00:05:21.195 CXX test/cpp_headers/nvmf_spec.o 00:05:21.195 CXX test/cpp_headers/nvmf_transport.o 00:05:21.195 CXX test/cpp_headers/opal.o 00:05:21.454 CXX test/cpp_headers/opal_spec.o 00:05:21.454 CC examples/nvmf/nvmf/nvmf.o 00:05:21.454 CXX test/cpp_headers/pci_ids.o 00:05:21.454 CXX test/cpp_headers/pipe.o 00:05:21.454 CXX test/cpp_headers/queue.o 00:05:21.454 CXX test/cpp_headers/reduce.o 00:05:21.454 CXX test/cpp_headers/rpc.o 00:05:21.454 CXX test/cpp_headers/scheduler.o 00:05:21.454 CXX test/cpp_headers/scsi.o 00:05:21.712 CXX test/cpp_headers/scsi_spec.o 00:05:21.712 CXX test/cpp_headers/sock.o 00:05:21.712 CXX test/cpp_headers/stdinc.o 00:05:21.712 LINK nvmf 00:05:21.712 CXX test/cpp_headers/string.o 00:05:21.712 CXX test/cpp_headers/thread.o 00:05:21.712 CXX test/cpp_headers/trace.o 00:05:21.712 CXX test/cpp_headers/trace_parser.o 00:05:21.712 CXX test/cpp_headers/tree.o 00:05:21.712 CXX test/cpp_headers/ublk.o 00:05:21.973 CXX test/cpp_headers/util.o 00:05:21.973 CXX test/cpp_headers/uuid.o 00:05:21.973 CXX test/cpp_headers/version.o 00:05:21.973 CXX test/cpp_headers/vfio_user_pci.o 00:05:21.973 CXX test/cpp_headers/vfio_user_spec.o 00:05:21.973 CXX test/cpp_headers/vhost.o 00:05:21.973 CXX test/cpp_headers/vmd.o 00:05:21.973 CXX test/cpp_headers/xor.o 00:05:21.973 LINK esnap 00:05:21.973 LINK cuse 00:05:21.973 CXX test/cpp_headers/zipf.o 00:05:22.540 00:05:22.540 real 1m31.581s 00:05:22.540 user 8m31.626s 00:05:22.540 sys 1m42.790s 00:05:22.540 09:33:09 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:05:22.540 ************************************ 00:05:22.540 END TEST make 00:05:22.540 ************************************ 00:05:22.540 09:33:09 make -- common/autotest_common.sh@10 -- $ set +x 00:05:22.540 09:33:09 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:05:22.540 09:33:09 -- pm/common@29 -- $ signal_monitor_resources TERM 00:05:22.540 09:33:09 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:05:22.540 09:33:09 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:22.540 09:33:09 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:05:22.540 09:33:09 -- pm/common@44 -- $ pid=5243 00:05:22.540 09:33:09 -- pm/common@50 -- $ kill -TERM 5243 00:05:22.540 09:33:09 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:22.540 09:33:09 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:05:22.540 09:33:09 -- pm/common@44 -- $ pid=5245 00:05:22.540 09:33:09 -- pm/common@50 -- $ kill -TERM 5245 00:05:22.540 09:33:09 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:05:22.540 09:33:09 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:05:22.540 09:33:10 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:22.540 09:33:10 -- common/autotest_common.sh@1693 -- # lcov --version 00:05:22.540 09:33:10 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:22.540 09:33:10 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:22.540 09:33:10 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 
00:05:22.540 09:33:10 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:22.540 09:33:10 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:22.540 09:33:10 -- scripts/common.sh@336 -- # IFS=.-: 00:05:22.540 09:33:10 -- scripts/common.sh@336 -- # read -ra ver1 00:05:22.540 09:33:10 -- scripts/common.sh@337 -- # IFS=.-: 00:05:22.540 09:33:10 -- scripts/common.sh@337 -- # read -ra ver2 00:05:22.540 09:33:10 -- scripts/common.sh@338 -- # local 'op=<' 00:05:22.540 09:33:10 -- scripts/common.sh@340 -- # ver1_l=2 00:05:22.540 09:33:10 -- scripts/common.sh@341 -- # ver2_l=1 00:05:22.540 09:33:10 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:22.540 09:33:10 -- scripts/common.sh@344 -- # case "$op" in 00:05:22.540 09:33:10 -- scripts/common.sh@345 -- # : 1 00:05:22.540 09:33:10 -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:22.540 09:33:10 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:22.540 09:33:10 -- scripts/common.sh@365 -- # decimal 1 00:05:22.540 09:33:10 -- scripts/common.sh@353 -- # local d=1 00:05:22.540 09:33:10 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:22.540 09:33:10 -- scripts/common.sh@355 -- # echo 1 00:05:22.540 09:33:10 -- scripts/common.sh@365 -- # ver1[v]=1 00:05:22.540 09:33:10 -- scripts/common.sh@366 -- # decimal 2 00:05:22.540 09:33:10 -- scripts/common.sh@353 -- # local d=2 00:05:22.540 09:33:10 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:22.540 09:33:10 -- scripts/common.sh@355 -- # echo 2 00:05:22.540 09:33:10 -- scripts/common.sh@366 -- # ver2[v]=2 00:05:22.540 09:33:10 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:22.540 09:33:10 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:22.540 09:33:10 -- scripts/common.sh@368 -- # return 0 00:05:22.540 09:33:10 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:22.540 09:33:10 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:22.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:22.540 --rc genhtml_branch_coverage=1 00:05:22.540 --rc genhtml_function_coverage=1 00:05:22.540 --rc genhtml_legend=1 00:05:22.540 --rc geninfo_all_blocks=1 00:05:22.540 --rc geninfo_unexecuted_blocks=1 00:05:22.540 00:05:22.540 ' 00:05:22.540 09:33:10 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:22.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:22.540 --rc genhtml_branch_coverage=1 00:05:22.540 --rc genhtml_function_coverage=1 00:05:22.540 --rc genhtml_legend=1 00:05:22.540 --rc geninfo_all_blocks=1 00:05:22.540 --rc geninfo_unexecuted_blocks=1 00:05:22.540 00:05:22.540 ' 00:05:22.540 09:33:10 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:22.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:22.540 --rc genhtml_branch_coverage=1 00:05:22.540 --rc genhtml_function_coverage=1 00:05:22.540 --rc genhtml_legend=1 00:05:22.540 --rc geninfo_all_blocks=1 00:05:22.540 --rc geninfo_unexecuted_blocks=1 00:05:22.540 00:05:22.540 ' 00:05:22.540 09:33:10 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:22.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:22.540 --rc genhtml_branch_coverage=1 00:05:22.540 --rc genhtml_function_coverage=1 00:05:22.540 --rc genhtml_legend=1 00:05:22.540 --rc geninfo_all_blocks=1 00:05:22.540 --rc geninfo_unexecuted_blocks=1 00:05:22.540 00:05:22.540 ' 00:05:22.540 09:33:10 -- spdk/autotest.sh@25 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:22.540 09:33:10 -- nvmf/common.sh@7 -- # uname -s 00:05:22.540 09:33:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:22.540 09:33:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:22.540 09:33:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:22.540 09:33:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:22.540 09:33:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:22.540 09:33:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:22.540 09:33:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:22.540 09:33:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:22.540 09:33:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:22.540 09:33:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:22.799 09:33:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c 00:05:22.799 09:33:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=9203ba0c-8506-4f0b-a886-a7f874c4694c 00:05:22.799 09:33:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:22.799 09:33:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:22.799 09:33:10 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:05:22.799 09:33:10 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:22.799 09:33:10 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:22.799 09:33:10 -- scripts/common.sh@15 -- # shopt -s extglob 00:05:22.799 09:33:10 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:22.799 09:33:10 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:22.799 09:33:10 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:22.799 09:33:10 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:22.799 09:33:10 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:22.799 09:33:10 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:22.799 09:33:10 -- paths/export.sh@5 -- # export PATH 00:05:22.799 09:33:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:22.799 09:33:10 -- nvmf/common.sh@51 -- # : 0 00:05:22.799 09:33:10 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:22.799 09:33:10 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:22.799 09:33:10 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:22.799 09:33:10 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:22.799 09:33:10 -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:05:22.799 09:33:10 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:22.799 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:22.799 09:33:10 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:22.799 09:33:10 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:22.799 09:33:10 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:22.799 09:33:10 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:05:22.799 09:33:10 -- spdk/autotest.sh@32 -- # uname -s 00:05:22.799 09:33:10 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:05:22.799 09:33:10 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:05:22.799 09:33:10 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:05:22.799 09:33:10 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:05:22.799 09:33:10 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:05:22.799 09:33:10 -- spdk/autotest.sh@44 -- # modprobe nbd 00:05:22.799 09:33:10 -- spdk/autotest.sh@46 -- # type -P udevadm 00:05:22.799 09:33:10 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:05:22.799 09:33:10 -- spdk/autotest.sh@48 -- # udevadm_pid=54385 00:05:22.799 09:33:10 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:05:22.799 09:33:10 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:05:22.799 09:33:10 -- pm/common@17 -- # local monitor 00:05:22.799 09:33:10 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:22.799 09:33:10 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:22.799 09:33:10 -- pm/common@25 -- # sleep 1 00:05:22.799 09:33:10 -- pm/common@21 -- # date +%s 00:05:22.799 09:33:10 -- pm/common@21 -- # date +%s 00:05:22.799 09:33:10 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732008790 00:05:22.799 09:33:10 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732008790 00:05:22.799 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732008790_collect-cpu-load.pm.log 00:05:22.799 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732008790_collect-vmstat.pm.log 00:05:23.736 09:33:11 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:05:23.736 09:33:11 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:05:23.736 09:33:11 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:23.736 09:33:11 -- common/autotest_common.sh@10 -- # set +x 00:05:23.736 09:33:11 -- spdk/autotest.sh@59 -- # create_test_list 00:05:23.736 09:33:11 -- common/autotest_common.sh@752 -- # xtrace_disable 00:05:23.736 09:33:11 -- common/autotest_common.sh@10 -- # set +x 00:05:23.736 09:33:11 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:05:23.737 09:33:11 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:05:23.737 09:33:11 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:05:23.737 09:33:11 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:05:23.737 09:33:11 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:05:23.737 09:33:11 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 
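Editor's note: autotest.sh above saves the existing core pattern ('|/usr/lib/systemd/systemd-coredump ...'), creates the coredumps output directory, and points core dumps at scripts/core-collector.sh. A simplified sketch of that setup; writing to /proc/sys/kernel/core_pattern and restoring it on exit are assumptions here, not confirmed by the log:

  # Simplified sketch: route kernel core dumps through a collector script.
  out=/home/vagrant/spdk_repo/spdk/../output/coredumps
  mkdir -p "$out"
  old_core_pattern=$(cat /proc/sys/kernel/core_pattern)   # saved so it can be restored later
  echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' > /proc/sys/kernel/core_pattern
  trap 'echo "$old_core_pattern" > /proc/sys/kernel/core_pattern' EXIT   # restore step is illustrative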
00:05:23.737 09:33:11 -- common/autotest_common.sh@1457 -- # uname 00:05:23.737 09:33:11 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:05:23.737 09:33:11 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:05:23.737 09:33:11 -- common/autotest_common.sh@1477 -- # uname 00:05:23.737 09:33:11 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:05:23.737 09:33:11 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:05:23.737 09:33:11 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:05:23.995 lcov: LCOV version 1.15 00:05:23.995 09:33:11 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:05:42.106 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:05:42.106 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:06:00.192 09:33:47 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:06:00.192 09:33:47 -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:00.192 09:33:47 -- common/autotest_common.sh@10 -- # set +x 00:06:00.192 09:33:47 -- spdk/autotest.sh@78 -- # rm -f 00:06:00.192 09:33:47 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:00.450 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:00.708 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:06:00.708 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:06:00.708 09:33:48 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:06:00.708 09:33:48 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:06:00.708 09:33:48 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:06:00.708 09:33:48 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:06:00.708 09:33:48 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:00.708 09:33:48 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:06:00.708 09:33:48 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:06:00.708 09:33:48 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:06:00.708 09:33:48 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:00.708 09:33:48 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:00.708 09:33:48 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:06:00.708 09:33:48 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:06:00.708 09:33:48 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:06:00.708 09:33:48 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:00.708 09:33:48 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:00.708 09:33:48 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n2 00:06:00.708 09:33:48 -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:06:00.708 09:33:48 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:06:00.708 09:33:48 -- common/autotest_common.sh@1653 -- 
# [[ none != none ]] 00:06:00.708 09:33:48 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:00.708 09:33:48 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n3 00:06:00.708 09:33:48 -- common/autotest_common.sh@1650 -- # local device=nvme1n3 00:06:00.708 09:33:48 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:06:00.708 09:33:48 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:00.708 09:33:48 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:06:00.708 09:33:48 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:00.708 09:33:48 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:00.708 09:33:48 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:06:00.708 09:33:48 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:06:00.708 09:33:48 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:06:00.708 No valid GPT data, bailing 00:06:00.708 09:33:48 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:06:00.708 09:33:48 -- scripts/common.sh@394 -- # pt= 00:06:00.708 09:33:48 -- scripts/common.sh@395 -- # return 1 00:06:00.708 09:33:48 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:06:00.708 1+0 records in 00:06:00.708 1+0 records out 00:06:00.708 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0044081 s, 238 MB/s 00:06:00.708 09:33:48 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:00.708 09:33:48 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:00.708 09:33:48 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:06:00.708 09:33:48 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:06:00.708 09:33:48 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:06:00.708 No valid GPT data, bailing 00:06:00.708 09:33:48 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:06:00.967 09:33:48 -- scripts/common.sh@394 -- # pt= 00:06:00.967 09:33:48 -- scripts/common.sh@395 -- # return 1 00:06:00.967 09:33:48 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:06:00.967 1+0 records in 00:06:00.967 1+0 records out 00:06:00.967 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00389172 s, 269 MB/s 00:06:00.967 09:33:48 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:00.967 09:33:48 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:00.967 09:33:48 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:06:00.967 09:33:48 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:06:00.967 09:33:48 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:06:00.967 No valid GPT data, bailing 00:06:00.967 09:33:48 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:06:00.967 09:33:48 -- scripts/common.sh@394 -- # pt= 00:06:00.967 09:33:48 -- scripts/common.sh@395 -- # return 1 00:06:00.967 09:33:48 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:06:00.967 1+0 records in 00:06:00.967 1+0 records out 00:06:00.967 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00420332 s, 249 MB/s 00:06:00.967 09:33:48 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:00.967 09:33:48 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:00.967 09:33:48 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:06:00.967 09:33:48 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:06:00.967 09:33:48 -- scripts/common.sh@390 -- # 
/home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:06:00.967 No valid GPT data, bailing 00:06:00.967 09:33:48 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:06:00.967 09:33:48 -- scripts/common.sh@394 -- # pt= 00:06:00.967 09:33:48 -- scripts/common.sh@395 -- # return 1 00:06:00.967 09:33:48 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:06:00.967 1+0 records in 00:06:00.967 1+0 records out 00:06:00.967 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00433616 s, 242 MB/s 00:06:00.967 09:33:48 -- spdk/autotest.sh@105 -- # sync 00:06:01.226 09:33:48 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:06:01.226 09:33:48 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:06:01.227 09:33:48 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:06:03.128 09:33:50 -- spdk/autotest.sh@111 -- # uname -s 00:06:03.128 09:33:50 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:06:03.128 09:33:50 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:06:03.128 09:33:50 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:06:03.477 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:03.477 Hugepages 00:06:03.477 node hugesize free / total 00:06:03.477 node0 1048576kB 0 / 0 00:06:03.477 node0 2048kB 0 / 0 00:06:03.477 00:06:03.477 Type BDF Vendor Device NUMA Driver Device Block devices 00:06:03.477 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:06:03.477 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:06:03.735 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:06:03.735 09:33:51 -- spdk/autotest.sh@117 -- # uname -s 00:06:03.735 09:33:51 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:06:03.735 09:33:51 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:06:03.735 09:33:51 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:04.301 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:04.301 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:04.301 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:06:04.559 09:33:51 -- common/autotest_common.sh@1517 -- # sleep 1 00:06:05.493 09:33:52 -- common/autotest_common.sh@1518 -- # bdfs=() 00:06:05.493 09:33:52 -- common/autotest_common.sh@1518 -- # local bdfs 00:06:05.493 09:33:52 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:06:05.493 09:33:52 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:06:05.493 09:33:52 -- common/autotest_common.sh@1498 -- # bdfs=() 00:06:05.493 09:33:52 -- common/autotest_common.sh@1498 -- # local bdfs 00:06:05.493 09:33:52 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:05.493 09:33:52 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:05.493 09:33:52 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:06:05.493 09:33:53 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:06:05.493 09:33:53 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:06:05.493 09:33:53 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:05.752 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 
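Editor's note: the pre-cleanup loop above probes each /dev/nvme*n* namespace with spdk-gpt.py and blkid; when no partition table is found ("No valid GPT data, bailing", empty PTTYPE), the first MiB of the namespace is zeroed with dd. A condensed sketch of that per-namespace wipe, using the same helper paths that appear in the log and simplifying the block_in_use check down to the PTTYPE probe:

  # Condensed sketch: zero the first MiB of unpartitioned NVMe namespaces.
  shopt -s extglob
  for dev in /dev/nvme*n!(*p*); do
      /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py "$dev" || true   # logs GPT state for the record
      pt=$(blkid -s PTTYPE -o value "$dev" || true)
      [[ -z $pt ]] || continue                                          # keep partitioned devices intact
      dd if=/dev/zero of="$dev" bs=1M count=1
  done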
00:06:05.752 Waiting for block devices as requested 00:06:05.752 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:06:06.010 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:06:06.010 09:33:53 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:06:06.010 09:33:53 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:06:06.010 09:33:53 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:06:06.010 09:33:53 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:06:06.010 09:33:53 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:06:06.010 09:33:53 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:06:06.010 09:33:53 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:06:06.010 09:33:53 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:06:06.010 09:33:53 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:06:06.010 09:33:53 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:06:06.010 09:33:53 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:06:06.010 09:33:53 -- common/autotest_common.sh@1531 -- # grep oacs 00:06:06.010 09:33:53 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:06:06.010 09:33:53 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:06:06.010 09:33:53 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:06:06.010 09:33:53 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:06:06.010 09:33:53 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:06:06.010 09:33:53 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:06:06.010 09:33:53 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:06:06.010 09:33:53 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:06:06.010 09:33:53 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:06:06.010 09:33:53 -- common/autotest_common.sh@1543 -- # continue 00:06:06.010 09:33:53 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:06:06.010 09:33:53 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:06:06.010 09:33:53 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:06:06.010 09:33:53 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:06:06.010 09:33:53 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:06:06.010 09:33:53 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:06:06.010 09:33:53 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:06:06.010 09:33:53 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:06:06.010 09:33:53 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:06:06.010 09:33:53 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:06:06.010 09:33:53 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:06:06.010 09:33:53 -- common/autotest_common.sh@1531 -- # grep oacs 00:06:06.010 09:33:53 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:06:06.010 09:33:53 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:06:06.010 09:33:53 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:06:06.010 09:33:53 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:06:06.010 09:33:53 
-- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:06:06.010 09:33:53 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:06:06.010 09:33:53 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:06:06.010 09:33:53 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:06:06.010 09:33:53 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:06:06.010 09:33:53 -- common/autotest_common.sh@1543 -- # continue 00:06:06.010 09:33:53 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:06:06.010 09:33:53 -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:06.010 09:33:53 -- common/autotest_common.sh@10 -- # set +x 00:06:06.269 09:33:53 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:06:06.269 09:33:53 -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:06.269 09:33:53 -- common/autotest_common.sh@10 -- # set +x 00:06:06.269 09:33:53 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:06.835 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:06.835 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:06.835 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:06:07.094 09:33:54 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:06:07.094 09:33:54 -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:07.094 09:33:54 -- common/autotest_common.sh@10 -- # set +x 00:06:07.094 09:33:54 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:06:07.094 09:33:54 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:06:07.094 09:33:54 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:06:07.094 09:33:54 -- common/autotest_common.sh@1563 -- # bdfs=() 00:06:07.094 09:33:54 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:06:07.094 09:33:54 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:06:07.094 09:33:54 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:06:07.094 09:33:54 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:06:07.094 09:33:54 -- common/autotest_common.sh@1498 -- # bdfs=() 00:06:07.094 09:33:54 -- common/autotest_common.sh@1498 -- # local bdfs 00:06:07.094 09:33:54 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:07.094 09:33:54 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:07.094 09:33:54 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:06:07.094 09:33:54 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:06:07.094 09:33:54 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:06:07.094 09:33:54 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:06:07.094 09:33:54 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:06:07.094 09:33:54 -- common/autotest_common.sh@1566 -- # device=0x0010 00:06:07.094 09:33:54 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:06:07.094 09:33:54 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:06:07.095 09:33:54 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:06:07.095 09:33:54 -- common/autotest_common.sh@1566 -- # device=0x0010 00:06:07.095 09:33:54 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:06:07.095 09:33:54 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:06:07.095 09:33:54 -- common/autotest_common.sh@1572 -- # return 0 
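Editor's note: the per-controller checks above read OACS from 'nvme id-ctrl', mask out the Namespace Management bit (0x12a & 0x8 = 8, so the feature is supported), and then read unvmcap. A standalone sketch of the same probe; it requires nvme-cli, and the variable names are illustrative:

  # Standalone sketch: check Namespace Management support and unallocated capacity.
  ctrlr=/dev/nvme0
  oacs=$(nvme id-ctrl "$ctrlr" | grep oacs | cut -d: -f2)     # e.g. ' 0x12a'
  oacs_ns_manage=$(( oacs & 0x8 ))                            # OACS bit 3: Namespace Management/Attachment
  if [[ $oacs_ns_manage -ne 0 ]]; then
      unvmcap=$(nvme id-ctrl "$ctrlr" | grep unvmcap | cut -d: -f2)
      echo "namespace management supported, unvmcap=$unvmcap"
  fi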
00:06:07.095 09:33:54 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:06:07.095 09:33:54 -- common/autotest_common.sh@1580 -- # return 0 00:06:07.095 09:33:54 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:06:07.095 09:33:54 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:06:07.095 09:33:54 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:06:07.095 09:33:54 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:06:07.095 09:33:54 -- spdk/autotest.sh@149 -- # timing_enter lib 00:06:07.095 09:33:54 -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:07.095 09:33:54 -- common/autotest_common.sh@10 -- # set +x 00:06:07.095 09:33:54 -- spdk/autotest.sh@151 -- # [[ 1 -eq 1 ]] 00:06:07.095 09:33:54 -- spdk/autotest.sh@152 -- # export SPDK_SOCK_IMPL_DEFAULT=uring 00:06:07.095 09:33:54 -- spdk/autotest.sh@152 -- # SPDK_SOCK_IMPL_DEFAULT=uring 00:06:07.095 09:33:54 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:06:07.095 09:33:54 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:07.095 09:33:54 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:07.095 09:33:54 -- common/autotest_common.sh@10 -- # set +x 00:06:07.095 ************************************ 00:06:07.095 START TEST env 00:06:07.095 ************************************ 00:06:07.095 09:33:54 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:06:07.095 * Looking for test storage... 00:06:07.095 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:06:07.095 09:33:54 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:07.095 09:33:54 env -- common/autotest_common.sh@1693 -- # lcov --version 00:06:07.095 09:33:54 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:07.354 09:33:54 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:07.354 09:33:54 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:07.354 09:33:54 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:07.354 09:33:54 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:07.354 09:33:54 env -- scripts/common.sh@336 -- # IFS=.-: 00:06:07.354 09:33:54 env -- scripts/common.sh@336 -- # read -ra ver1 00:06:07.354 09:33:54 env -- scripts/common.sh@337 -- # IFS=.-: 00:06:07.354 09:33:54 env -- scripts/common.sh@337 -- # read -ra ver2 00:06:07.354 09:33:54 env -- scripts/common.sh@338 -- # local 'op=<' 00:06:07.354 09:33:54 env -- scripts/common.sh@340 -- # ver1_l=2 00:06:07.354 09:33:54 env -- scripts/common.sh@341 -- # ver2_l=1 00:06:07.354 09:33:54 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:07.354 09:33:54 env -- scripts/common.sh@344 -- # case "$op" in 00:06:07.354 09:33:54 env -- scripts/common.sh@345 -- # : 1 00:06:07.354 09:33:54 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:07.354 09:33:54 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:07.354 09:33:54 env -- scripts/common.sh@365 -- # decimal 1 00:06:07.354 09:33:54 env -- scripts/common.sh@353 -- # local d=1 00:06:07.354 09:33:54 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:07.354 09:33:54 env -- scripts/common.sh@355 -- # echo 1 00:06:07.354 09:33:54 env -- scripts/common.sh@365 -- # ver1[v]=1 00:06:07.354 09:33:54 env -- scripts/common.sh@366 -- # decimal 2 00:06:07.354 09:33:54 env -- scripts/common.sh@353 -- # local d=2 00:06:07.354 09:33:54 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:07.354 09:33:54 env -- scripts/common.sh@355 -- # echo 2 00:06:07.354 09:33:54 env -- scripts/common.sh@366 -- # ver2[v]=2 00:06:07.354 09:33:54 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:07.354 09:33:54 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:07.354 09:33:54 env -- scripts/common.sh@368 -- # return 0 00:06:07.354 09:33:54 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:07.354 09:33:54 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:07.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.354 --rc genhtml_branch_coverage=1 00:06:07.354 --rc genhtml_function_coverage=1 00:06:07.354 --rc genhtml_legend=1 00:06:07.354 --rc geninfo_all_blocks=1 00:06:07.354 --rc geninfo_unexecuted_blocks=1 00:06:07.354 00:06:07.354 ' 00:06:07.354 09:33:54 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:07.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.354 --rc genhtml_branch_coverage=1 00:06:07.354 --rc genhtml_function_coverage=1 00:06:07.354 --rc genhtml_legend=1 00:06:07.354 --rc geninfo_all_blocks=1 00:06:07.354 --rc geninfo_unexecuted_blocks=1 00:06:07.354 00:06:07.354 ' 00:06:07.354 09:33:54 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:07.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.354 --rc genhtml_branch_coverage=1 00:06:07.354 --rc genhtml_function_coverage=1 00:06:07.354 --rc genhtml_legend=1 00:06:07.354 --rc geninfo_all_blocks=1 00:06:07.354 --rc geninfo_unexecuted_blocks=1 00:06:07.354 00:06:07.354 ' 00:06:07.354 09:33:54 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:07.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.354 --rc genhtml_branch_coverage=1 00:06:07.354 --rc genhtml_function_coverage=1 00:06:07.354 --rc genhtml_legend=1 00:06:07.354 --rc geninfo_all_blocks=1 00:06:07.354 --rc geninfo_unexecuted_blocks=1 00:06:07.354 00:06:07.354 ' 00:06:07.354 09:33:54 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:06:07.354 09:33:54 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:07.355 09:33:54 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:07.355 09:33:54 env -- common/autotest_common.sh@10 -- # set +x 00:06:07.355 ************************************ 00:06:07.355 START TEST env_memory 00:06:07.355 ************************************ 00:06:07.355 09:33:54 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:06:07.355 00:06:07.355 00:06:07.355 CUnit - A unit testing framework for C - Version 2.1-3 00:06:07.355 http://cunit.sourceforge.net/ 00:06:07.355 00:06:07.355 00:06:07.355 Suite: memory 00:06:07.355 Test: alloc and free memory map ...[2024-11-19 09:33:54.863580] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:06:07.355 passed 00:06:07.355 Test: mem map translation ...[2024-11-19 09:33:54.892695] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:06:07.355 [2024-11-19 09:33:54.892788] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:06:07.355 [2024-11-19 09:33:54.892860] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:06:07.355 [2024-11-19 09:33:54.892881] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:06:07.355 passed 00:06:07.355 Test: mem map registration ...[2024-11-19 09:33:54.953949] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:06:07.355 [2024-11-19 09:33:54.954040] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:06:07.355 passed 00:06:07.615 Test: mem map adjacent registrations ...passed 00:06:07.615 00:06:07.615 Run Summary: Type Total Ran Passed Failed Inactive 00:06:07.615 suites 1 1 n/a 0 0 00:06:07.615 tests 4 4 4 0 0 00:06:07.615 asserts 152 152 152 0 n/a 00:06:07.615 00:06:07.615 Elapsed time = 0.190 seconds 00:06:07.615 00:06:07.615 real 0m0.206s 00:06:07.615 user 0m0.186s 00:06:07.615 sys 0m0.017s 00:06:07.615 09:33:55 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:07.615 09:33:55 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:06:07.615 ************************************ 00:06:07.615 END TEST env_memory 00:06:07.615 ************************************ 00:06:07.615 09:33:55 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:06:07.615 09:33:55 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:07.615 09:33:55 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:07.615 09:33:55 env -- common/autotest_common.sh@10 -- # set +x 00:06:07.615 ************************************ 00:06:07.615 START TEST env_vtophys 00:06:07.615 ************************************ 00:06:07.615 09:33:55 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:06:07.615 EAL: lib.eal log level changed from notice to debug 00:06:07.615 EAL: Detected lcore 0 as core 0 on socket 0 00:06:07.615 EAL: Detected lcore 1 as core 0 on socket 0 00:06:07.615 EAL: Detected lcore 2 as core 0 on socket 0 00:06:07.615 EAL: Detected lcore 3 as core 0 on socket 0 00:06:07.615 EAL: Detected lcore 4 as core 0 on socket 0 00:06:07.615 EAL: Detected lcore 5 as core 0 on socket 0 00:06:07.615 EAL: Detected lcore 6 as core 0 on socket 0 00:06:07.615 EAL: Detected lcore 7 as core 0 on socket 0 00:06:07.615 EAL: Detected lcore 8 as core 0 on socket 0 00:06:07.615 EAL: Detected lcore 9 as core 0 on socket 0 00:06:07.615 EAL: Maximum logical cores by configuration: 128 00:06:07.615 EAL: Detected CPU lcores: 10 00:06:07.615 EAL: Detected NUMA nodes: 1 00:06:07.615 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:06:07.615 EAL: Detected shared linkage of DPDK 00:06:07.615 EAL: No 
shared files mode enabled, IPC will be disabled 00:06:07.615 EAL: Selected IOVA mode 'PA' 00:06:07.615 EAL: Probing VFIO support... 00:06:07.615 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:06:07.615 EAL: VFIO modules not loaded, skipping VFIO support... 00:06:07.615 EAL: Ask a virtual area of 0x2e000 bytes 00:06:07.615 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:06:07.615 EAL: Setting up physically contiguous memory... 00:06:07.615 EAL: Setting maximum number of open files to 524288 00:06:07.615 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:06:07.615 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:06:07.615 EAL: Ask a virtual area of 0x61000 bytes 00:06:07.615 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:06:07.615 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:07.615 EAL: Ask a virtual area of 0x400000000 bytes 00:06:07.615 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:06:07.615 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:06:07.615 EAL: Ask a virtual area of 0x61000 bytes 00:06:07.615 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:06:07.615 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:07.616 EAL: Ask a virtual area of 0x400000000 bytes 00:06:07.616 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:06:07.616 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:06:07.616 EAL: Ask a virtual area of 0x61000 bytes 00:06:07.616 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:06:07.616 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:07.616 EAL: Ask a virtual area of 0x400000000 bytes 00:06:07.616 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:06:07.616 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:06:07.616 EAL: Ask a virtual area of 0x61000 bytes 00:06:07.616 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:06:07.616 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:07.616 EAL: Ask a virtual area of 0x400000000 bytes 00:06:07.616 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:06:07.616 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:06:07.616 EAL: Hugepages will be freed exactly as allocated. 00:06:07.616 EAL: No shared files mode enabled, IPC is disabled 00:06:07.616 EAL: No shared files mode enabled, IPC is disabled 00:06:07.616 EAL: TSC frequency is ~2200000 KHz 00:06:07.616 EAL: Main lcore 0 is ready (tid=7fe5765f5a00;cpuset=[0]) 00:06:07.616 EAL: Trying to obtain current memory policy. 00:06:07.616 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:07.616 EAL: Restoring previous memory policy: 0 00:06:07.616 EAL: request: mp_malloc_sync 00:06:07.616 EAL: No shared files mode enabled, IPC is disabled 00:06:07.616 EAL: Heap on socket 0 was expanded by 2MB 00:06:07.616 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:06:07.616 EAL: No PCI address specified using 'addr=' in: bus=pci 00:06:07.616 EAL: Mem event callback 'spdk:(nil)' registered 00:06:07.616 EAL: Module /sys/module/vfio_pci not found! 
error 2 (No such file or directory) 00:06:07.874 00:06:07.874 00:06:07.874 CUnit - A unit testing framework for C - Version 2.1-3 00:06:07.874 http://cunit.sourceforge.net/ 00:06:07.874 00:06:07.874 00:06:07.874 Suite: components_suite 00:06:07.874 Test: vtophys_malloc_test ...passed 00:06:07.874 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:06:07.874 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:07.874 EAL: Restoring previous memory policy: 4 00:06:07.874 EAL: Calling mem event callback 'spdk:(nil)' 00:06:07.874 EAL: request: mp_malloc_sync 00:06:07.874 EAL: No shared files mode enabled, IPC is disabled 00:06:07.874 EAL: Heap on socket 0 was expanded by 4MB 00:06:07.874 EAL: Calling mem event callback 'spdk:(nil)' 00:06:07.874 EAL: request: mp_malloc_sync 00:06:07.874 EAL: No shared files mode enabled, IPC is disabled 00:06:07.874 EAL: Heap on socket 0 was shrunk by 4MB 00:06:07.874 EAL: Trying to obtain current memory policy. 00:06:07.874 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:07.874 EAL: Restoring previous memory policy: 4 00:06:07.874 EAL: Calling mem event callback 'spdk:(nil)' 00:06:07.874 EAL: request: mp_malloc_sync 00:06:07.874 EAL: No shared files mode enabled, IPC is disabled 00:06:07.874 EAL: Heap on socket 0 was expanded by 6MB 00:06:07.874 EAL: Calling mem event callback 'spdk:(nil)' 00:06:07.874 EAL: request: mp_malloc_sync 00:06:07.874 EAL: No shared files mode enabled, IPC is disabled 00:06:07.874 EAL: Heap on socket 0 was shrunk by 6MB 00:06:07.874 EAL: Trying to obtain current memory policy. 00:06:07.874 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:07.874 EAL: Restoring previous memory policy: 4 00:06:07.874 EAL: Calling mem event callback 'spdk:(nil)' 00:06:07.874 EAL: request: mp_malloc_sync 00:06:07.874 EAL: No shared files mode enabled, IPC is disabled 00:06:07.874 EAL: Heap on socket 0 was expanded by 10MB 00:06:07.874 EAL: Calling mem event callback 'spdk:(nil)' 00:06:07.874 EAL: request: mp_malloc_sync 00:06:07.874 EAL: No shared files mode enabled, IPC is disabled 00:06:07.874 EAL: Heap on socket 0 was shrunk by 10MB 00:06:07.874 EAL: Trying to obtain current memory policy. 00:06:07.874 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:07.874 EAL: Restoring previous memory policy: 4 00:06:07.874 EAL: Calling mem event callback 'spdk:(nil)' 00:06:07.874 EAL: request: mp_malloc_sync 00:06:07.874 EAL: No shared files mode enabled, IPC is disabled 00:06:07.874 EAL: Heap on socket 0 was expanded by 18MB 00:06:07.874 EAL: Calling mem event callback 'spdk:(nil)' 00:06:07.874 EAL: request: mp_malloc_sync 00:06:07.874 EAL: No shared files mode enabled, IPC is disabled 00:06:07.874 EAL: Heap on socket 0 was shrunk by 18MB 00:06:07.874 EAL: Trying to obtain current memory policy. 00:06:07.874 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:07.874 EAL: Restoring previous memory policy: 4 00:06:07.874 EAL: Calling mem event callback 'spdk:(nil)' 00:06:07.874 EAL: request: mp_malloc_sync 00:06:07.874 EAL: No shared files mode enabled, IPC is disabled 00:06:07.874 EAL: Heap on socket 0 was expanded by 34MB 00:06:07.874 EAL: Calling mem event callback 'spdk:(nil)' 00:06:07.874 EAL: request: mp_malloc_sync 00:06:07.874 EAL: No shared files mode enabled, IPC is disabled 00:06:07.874 EAL: Heap on socket 0 was shrunk by 34MB 00:06:07.874 EAL: Trying to obtain current memory policy. 
00:06:07.874 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:07.874 EAL: Restoring previous memory policy: 4 00:06:07.874 EAL: Calling mem event callback 'spdk:(nil)' 00:06:07.874 EAL: request: mp_malloc_sync 00:06:07.874 EAL: No shared files mode enabled, IPC is disabled 00:06:07.874 EAL: Heap on socket 0 was expanded by 66MB 00:06:07.874 EAL: Calling mem event callback 'spdk:(nil)' 00:06:07.874 EAL: request: mp_malloc_sync 00:06:07.874 EAL: No shared files mode enabled, IPC is disabled 00:06:07.874 EAL: Heap on socket 0 was shrunk by 66MB 00:06:07.874 EAL: Trying to obtain current memory policy. 00:06:07.874 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:07.874 EAL: Restoring previous memory policy: 4 00:06:07.874 EAL: Calling mem event callback 'spdk:(nil)' 00:06:07.874 EAL: request: mp_malloc_sync 00:06:07.874 EAL: No shared files mode enabled, IPC is disabled 00:06:07.874 EAL: Heap on socket 0 was expanded by 130MB 00:06:07.874 EAL: Calling mem event callback 'spdk:(nil)' 00:06:07.874 EAL: request: mp_malloc_sync 00:06:07.874 EAL: No shared files mode enabled, IPC is disabled 00:06:07.874 EAL: Heap on socket 0 was shrunk by 130MB 00:06:07.874 EAL: Trying to obtain current memory policy. 00:06:07.874 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:07.874 EAL: Restoring previous memory policy: 4 00:06:07.874 EAL: Calling mem event callback 'spdk:(nil)' 00:06:07.874 EAL: request: mp_malloc_sync 00:06:07.874 EAL: No shared files mode enabled, IPC is disabled 00:06:07.874 EAL: Heap on socket 0 was expanded by 258MB 00:06:08.132 EAL: Calling mem event callback 'spdk:(nil)' 00:06:08.132 EAL: request: mp_malloc_sync 00:06:08.132 EAL: No shared files mode enabled, IPC is disabled 00:06:08.132 EAL: Heap on socket 0 was shrunk by 258MB 00:06:08.132 EAL: Trying to obtain current memory policy. 00:06:08.132 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:08.132 EAL: Restoring previous memory policy: 4 00:06:08.132 EAL: Calling mem event callback 'spdk:(nil)' 00:06:08.132 EAL: request: mp_malloc_sync 00:06:08.132 EAL: No shared files mode enabled, IPC is disabled 00:06:08.132 EAL: Heap on socket 0 was expanded by 514MB 00:06:08.389 EAL: Calling mem event callback 'spdk:(nil)' 00:06:08.389 EAL: request: mp_malloc_sync 00:06:08.389 EAL: No shared files mode enabled, IPC is disabled 00:06:08.389 EAL: Heap on socket 0 was shrunk by 514MB 00:06:08.389 EAL: Trying to obtain current memory policy. 
00:06:08.389 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:08.648 EAL: Restoring previous memory policy: 4 00:06:08.648 EAL: Calling mem event callback 'spdk:(nil)' 00:06:08.648 EAL: request: mp_malloc_sync 00:06:08.648 EAL: No shared files mode enabled, IPC is disabled 00:06:08.648 EAL: Heap on socket 0 was expanded by 1026MB 00:06:08.907 EAL: Calling mem event callback 'spdk:(nil)' 00:06:09.166 passed 00:06:09.166 00:06:09.166 Run Summary: Type Total Ran Passed Failed Inactive 00:06:09.166 suites 1 1 n/a 0 0 00:06:09.166 tests 2 2 2 0 0 00:06:09.166 asserts 5603 5603 5603 0 n/a 00:06:09.166 00:06:09.166 Elapsed time = 1.278 seconds 00:06:09.166 EAL: request: mp_malloc_sync 00:06:09.166 EAL: No shared files mode enabled, IPC is disabled 00:06:09.166 EAL: Heap on socket 0 was shrunk by 1026MB 00:06:09.166 EAL: Calling mem event callback 'spdk:(nil)' 00:06:09.166 EAL: request: mp_malloc_sync 00:06:09.166 EAL: No shared files mode enabled, IPC is disabled 00:06:09.166 EAL: Heap on socket 0 was shrunk by 2MB 00:06:09.166 EAL: No shared files mode enabled, IPC is disabled 00:06:09.166 EAL: No shared files mode enabled, IPC is disabled 00:06:09.166 EAL: No shared files mode enabled, IPC is disabled 00:06:09.166 00:06:09.166 real 0m1.493s 00:06:09.166 user 0m0.795s 00:06:09.166 sys 0m0.555s 00:06:09.166 09:33:56 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:09.166 09:33:56 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:06:09.166 ************************************ 00:06:09.166 END TEST env_vtophys 00:06:09.166 ************************************ 00:06:09.166 09:33:56 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:06:09.166 09:33:56 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:09.166 09:33:56 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:09.166 09:33:56 env -- common/autotest_common.sh@10 -- # set +x 00:06:09.166 ************************************ 00:06:09.166 START TEST env_pci 00:06:09.166 ************************************ 00:06:09.166 09:33:56 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:06:09.166 00:06:09.166 00:06:09.166 CUnit - A unit testing framework for C - Version 2.1-3 00:06:09.166 http://cunit.sourceforge.net/ 00:06:09.166 00:06:09.166 00:06:09.166 Suite: pci 00:06:09.166 Test: pci_hook ...[2024-11-19 09:33:56.621554] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56649 has claimed it 00:06:09.166 passed 00:06:09.166 00:06:09.166 Run Summary: Type Total Ran Passed Failed Inactive 00:06:09.166 suites 1 1 n/a 0 0 00:06:09.166 tests 1 1 1 0 0 00:06:09.166 asserts 25 25 25 0 n/a 00:06:09.166 00:06:09.166 Elapsed time = 0.002 seconds 00:06:09.166 EAL: Cannot find device (10000:00:01.0) 00:06:09.166 EAL: Failed to attach device on primary process 00:06:09.166 00:06:09.166 real 0m0.018s 00:06:09.166 user 0m0.009s 00:06:09.166 sys 0m0.009s 00:06:09.166 09:33:56 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:09.166 09:33:56 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:06:09.166 ************************************ 00:06:09.166 END TEST env_pci 00:06:09.166 ************************************ 00:06:09.166 09:33:56 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:06:09.166 09:33:56 env -- env/env.sh@15 -- # uname 00:06:09.166 09:33:56 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:06:09.166 09:33:56 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:06:09.166 09:33:56 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:09.166 09:33:56 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:06:09.166 09:33:56 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:09.166 09:33:56 env -- common/autotest_common.sh@10 -- # set +x 00:06:09.166 ************************************ 00:06:09.166 START TEST env_dpdk_post_init 00:06:09.166 ************************************ 00:06:09.166 09:33:56 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:09.166 EAL: Detected CPU lcores: 10 00:06:09.166 EAL: Detected NUMA nodes: 1 00:06:09.166 EAL: Detected shared linkage of DPDK 00:06:09.166 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:09.166 EAL: Selected IOVA mode 'PA' 00:06:09.425 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:09.425 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:06:09.425 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:06:09.425 Starting DPDK initialization... 00:06:09.425 Starting SPDK post initialization... 00:06:09.425 SPDK NVMe probe 00:06:09.425 Attaching to 0000:00:10.0 00:06:09.425 Attaching to 0000:00:11.0 00:06:09.425 Attached to 0000:00:10.0 00:06:09.425 Attached to 0000:00:11.0 00:06:09.425 Cleaning up... 00:06:09.425 00:06:09.425 real 0m0.190s 00:06:09.425 user 0m0.057s 00:06:09.425 sys 0m0.033s 00:06:09.425 09:33:56 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:09.425 09:33:56 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:06:09.425 ************************************ 00:06:09.425 END TEST env_dpdk_post_init 00:06:09.425 ************************************ 00:06:09.425 09:33:56 env -- env/env.sh@26 -- # uname 00:06:09.425 09:33:56 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:06:09.425 09:33:56 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:06:09.425 09:33:56 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:09.425 09:33:56 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:09.425 09:33:56 env -- common/autotest_common.sh@10 -- # set +x 00:06:09.425 ************************************ 00:06:09.425 START TEST env_mem_callbacks 00:06:09.425 ************************************ 00:06:09.425 09:33:56 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:06:09.425 EAL: Detected CPU lcores: 10 00:06:09.425 EAL: Detected NUMA nodes: 1 00:06:09.425 EAL: Detected shared linkage of DPDK 00:06:09.425 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:09.425 EAL: Selected IOVA mode 'PA' 00:06:09.684 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:09.684 00:06:09.684 00:06:09.684 CUnit - A unit testing framework for C - Version 2.1-3 00:06:09.684 http://cunit.sourceforge.net/ 00:06:09.684 00:06:09.684 00:06:09.684 Suite: memory 00:06:09.684 Test: test ... 
00:06:09.684 register 0x200000200000 2097152 00:06:09.684 malloc 3145728 00:06:09.684 register 0x200000400000 4194304 00:06:09.684 buf 0x200000500000 len 3145728 PASSED 00:06:09.684 malloc 64 00:06:09.684 buf 0x2000004fff40 len 64 PASSED 00:06:09.684 malloc 4194304 00:06:09.684 register 0x200000800000 6291456 00:06:09.684 buf 0x200000a00000 len 4194304 PASSED 00:06:09.684 free 0x200000500000 3145728 00:06:09.684 free 0x2000004fff40 64 00:06:09.684 unregister 0x200000400000 4194304 PASSED 00:06:09.684 free 0x200000a00000 4194304 00:06:09.684 unregister 0x200000800000 6291456 PASSED 00:06:09.684 malloc 8388608 00:06:09.684 register 0x200000400000 10485760 00:06:09.684 buf 0x200000600000 len 8388608 PASSED 00:06:09.684 free 0x200000600000 8388608 00:06:09.684 unregister 0x200000400000 10485760 PASSED 00:06:09.684 passed 00:06:09.684 00:06:09.684 Run Summary: Type Total Ran Passed Failed Inactive 00:06:09.684 suites 1 1 n/a 0 0 00:06:09.684 tests 1 1 1 0 0 00:06:09.684 asserts 15 15 15 0 n/a 00:06:09.684 00:06:09.684 Elapsed time = 0.007 seconds 00:06:09.684 00:06:09.684 real 0m0.143s 00:06:09.684 user 0m0.014s 00:06:09.684 sys 0m0.027s 00:06:09.684 09:33:57 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:09.684 09:33:57 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:06:09.684 ************************************ 00:06:09.684 END TEST env_mem_callbacks 00:06:09.684 ************************************ 00:06:09.684 00:06:09.684 real 0m2.474s 00:06:09.684 user 0m1.267s 00:06:09.684 sys 0m0.852s 00:06:09.684 09:33:57 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:09.684 09:33:57 env -- common/autotest_common.sh@10 -- # set +x 00:06:09.684 ************************************ 00:06:09.684 END TEST env 00:06:09.684 ************************************ 00:06:09.684 09:33:57 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:06:09.684 09:33:57 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:09.684 09:33:57 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:09.684 09:33:57 -- common/autotest_common.sh@10 -- # set +x 00:06:09.684 ************************************ 00:06:09.684 START TEST rpc 00:06:09.684 ************************************ 00:06:09.684 09:33:57 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:06:09.684 * Looking for test storage... 
00:06:09.684 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:06:09.684 09:33:57 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:09.684 09:33:57 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:06:09.684 09:33:57 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:09.943 09:33:57 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:09.943 09:33:57 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:09.943 09:33:57 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:09.943 09:33:57 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:09.943 09:33:57 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:09.943 09:33:57 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:09.944 09:33:57 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:09.944 09:33:57 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:09.944 09:33:57 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:09.944 09:33:57 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:09.944 09:33:57 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:09.944 09:33:57 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:09.944 09:33:57 rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:09.944 09:33:57 rpc -- scripts/common.sh@345 -- # : 1 00:06:09.944 09:33:57 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:09.944 09:33:57 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:09.944 09:33:57 rpc -- scripts/common.sh@365 -- # decimal 1 00:06:09.944 09:33:57 rpc -- scripts/common.sh@353 -- # local d=1 00:06:09.944 09:33:57 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:09.944 09:33:57 rpc -- scripts/common.sh@355 -- # echo 1 00:06:09.944 09:33:57 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:09.944 09:33:57 rpc -- scripts/common.sh@366 -- # decimal 2 00:06:09.944 09:33:57 rpc -- scripts/common.sh@353 -- # local d=2 00:06:09.944 09:33:57 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:09.944 09:33:57 rpc -- scripts/common.sh@355 -- # echo 2 00:06:09.944 09:33:57 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:09.944 09:33:57 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:09.944 09:33:57 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:09.944 09:33:57 rpc -- scripts/common.sh@368 -- # return 0 00:06:09.944 09:33:57 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:09.944 09:33:57 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:09.944 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.944 --rc genhtml_branch_coverage=1 00:06:09.944 --rc genhtml_function_coverage=1 00:06:09.944 --rc genhtml_legend=1 00:06:09.944 --rc geninfo_all_blocks=1 00:06:09.944 --rc geninfo_unexecuted_blocks=1 00:06:09.944 00:06:09.944 ' 00:06:09.944 09:33:57 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:09.944 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.944 --rc genhtml_branch_coverage=1 00:06:09.944 --rc genhtml_function_coverage=1 00:06:09.944 --rc genhtml_legend=1 00:06:09.944 --rc geninfo_all_blocks=1 00:06:09.944 --rc geninfo_unexecuted_blocks=1 00:06:09.944 00:06:09.944 ' 00:06:09.944 09:33:57 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:09.944 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.944 --rc genhtml_branch_coverage=1 00:06:09.944 --rc genhtml_function_coverage=1 00:06:09.944 --rc 
genhtml_legend=1 00:06:09.944 --rc geninfo_all_blocks=1 00:06:09.944 --rc geninfo_unexecuted_blocks=1 00:06:09.944 00:06:09.944 ' 00:06:09.944 09:33:57 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:09.944 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.944 --rc genhtml_branch_coverage=1 00:06:09.944 --rc genhtml_function_coverage=1 00:06:09.944 --rc genhtml_legend=1 00:06:09.944 --rc geninfo_all_blocks=1 00:06:09.944 --rc geninfo_unexecuted_blocks=1 00:06:09.944 00:06:09.944 ' 00:06:09.944 09:33:57 rpc -- rpc/rpc.sh@65 -- # spdk_pid=56772 00:06:09.944 09:33:57 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:06:09.944 09:33:57 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:09.944 09:33:57 rpc -- rpc/rpc.sh@67 -- # waitforlisten 56772 00:06:09.944 09:33:57 rpc -- common/autotest_common.sh@835 -- # '[' -z 56772 ']' 00:06:09.944 09:33:57 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:09.944 09:33:57 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:09.944 09:33:57 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:09.944 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:09.944 09:33:57 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:09.944 09:33:57 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:09.944 [2024-11-19 09:33:57.418509] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:06:09.944 [2024-11-19 09:33:57.418663] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56772 ] 00:06:10.203 [2024-11-19 09:33:57.568084] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.203 [2024-11-19 09:33:57.631195] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:06:10.203 [2024-11-19 09:33:57.631258] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 56772' to capture a snapshot of events at runtime. 00:06:10.203 [2024-11-19 09:33:57.631278] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:10.203 [2024-11-19 09:33:57.631287] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:10.203 [2024-11-19 09:33:57.631294] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid56772 for offline analysis/debug. 
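The target launched above was started with '-e bdev' tracepoints enabled, and the app_setup_trace notices spell out how to snapshot them at runtime (spdk_trace -s spdk_tgt -p 56772). The rpc_integrity and rpc_daemon_integrity cases that follow drive this same target over its JSON-RPC socket: create a malloc bdev, layer a passthru bdev on top of it, check the JSON reported by bdev_get_bdevs, then delete both. A rough command-line equivalent of that flow, sketched here for reference and assuming a running spdk_tgt on the default /var/tmp/spdk.sock and the repo layout used in this log, would be:

    # create an 8 MB malloc bdev with 512-byte blocks (the test captures the returned name, Malloc0)
    scripts/rpc.py -s /var/tmp/spdk.sock bdev_malloc_create 8 512
    # wrap it in a passthru bdev, then confirm both bdevs are reported
    scripts/rpc.py -s /var/tmp/spdk.sock bdev_passthru_create -b Malloc0 -p Passthru0
    scripts/rpc.py -s /var/tmp/spdk.sock bdev_get_bdevs | jq length   # expected: 2
    # tear down in reverse order
    scripts/rpc.py -s /var/tmp/spdk.sock bdev_passthru_delete Passthru0
    scripts/rpc.py -s /var/tmp/spdk.sock bdev_malloc_delete Malloc0

The traced test below issues the same RPCs through the rpc_cmd wrapper from autotest_common.sh rather than calling rpc.py directly.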
00:06:10.203 [2024-11-19 09:33:57.631706] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.203 [2024-11-19 09:33:57.705941] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:10.461 09:33:57 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:10.461 09:33:57 rpc -- common/autotest_common.sh@868 -- # return 0 00:06:10.461 09:33:57 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:06:10.461 09:33:57 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:06:10.461 09:33:57 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:06:10.461 09:33:57 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:06:10.461 09:33:57 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:10.461 09:33:57 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:10.461 09:33:57 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:10.461 ************************************ 00:06:10.461 START TEST rpc_integrity 00:06:10.461 ************************************ 00:06:10.461 09:33:57 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:06:10.461 09:33:57 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:10.461 09:33:57 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:10.461 09:33:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:10.461 09:33:57 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:10.461 09:33:57 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:10.461 09:33:57 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:10.462 09:33:57 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:10.462 09:33:57 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:10.462 09:33:57 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:10.462 09:33:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:10.462 09:33:57 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:10.462 09:33:57 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:06:10.462 09:33:57 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:10.462 09:33:57 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:10.462 09:33:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:10.462 09:33:58 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:10.462 09:33:58 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:10.462 { 00:06:10.462 "name": "Malloc0", 00:06:10.462 "aliases": [ 00:06:10.462 "e1cb3d75-ad36-4f74-b3d1-7c6f6c243fc5" 00:06:10.462 ], 00:06:10.462 "product_name": "Malloc disk", 00:06:10.462 "block_size": 512, 00:06:10.462 "num_blocks": 16384, 00:06:10.462 "uuid": "e1cb3d75-ad36-4f74-b3d1-7c6f6c243fc5", 00:06:10.462 "assigned_rate_limits": { 00:06:10.462 "rw_ios_per_sec": 0, 00:06:10.462 "rw_mbytes_per_sec": 0, 00:06:10.462 "r_mbytes_per_sec": 0, 00:06:10.462 "w_mbytes_per_sec": 0 00:06:10.462 }, 00:06:10.462 "claimed": false, 00:06:10.462 "zoned": false, 00:06:10.462 
"supported_io_types": { 00:06:10.462 "read": true, 00:06:10.462 "write": true, 00:06:10.462 "unmap": true, 00:06:10.462 "flush": true, 00:06:10.462 "reset": true, 00:06:10.462 "nvme_admin": false, 00:06:10.462 "nvme_io": false, 00:06:10.462 "nvme_io_md": false, 00:06:10.462 "write_zeroes": true, 00:06:10.462 "zcopy": true, 00:06:10.462 "get_zone_info": false, 00:06:10.462 "zone_management": false, 00:06:10.462 "zone_append": false, 00:06:10.462 "compare": false, 00:06:10.462 "compare_and_write": false, 00:06:10.462 "abort": true, 00:06:10.462 "seek_hole": false, 00:06:10.462 "seek_data": false, 00:06:10.462 "copy": true, 00:06:10.462 "nvme_iov_md": false 00:06:10.462 }, 00:06:10.462 "memory_domains": [ 00:06:10.462 { 00:06:10.462 "dma_device_id": "system", 00:06:10.462 "dma_device_type": 1 00:06:10.462 }, 00:06:10.462 { 00:06:10.462 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:10.462 "dma_device_type": 2 00:06:10.462 } 00:06:10.462 ], 00:06:10.462 "driver_specific": {} 00:06:10.462 } 00:06:10.462 ]' 00:06:10.462 09:33:58 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:10.462 09:33:58 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:10.462 09:33:58 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:06:10.462 09:33:58 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:10.462 09:33:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:10.462 [2024-11-19 09:33:58.068756] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:06:10.462 [2024-11-19 09:33:58.068813] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:10.462 [2024-11-19 09:33:58.068835] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xc44f20 00:06:10.462 [2024-11-19 09:33:58.068845] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:10.462 [2024-11-19 09:33:58.070643] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:10.462 [2024-11-19 09:33:58.070676] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:10.462 Passthru0 00:06:10.462 09:33:58 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:10.462 09:33:58 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:10.462 09:33:58 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:10.462 09:33:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:10.720 09:33:58 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:10.720 09:33:58 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:10.720 { 00:06:10.720 "name": "Malloc0", 00:06:10.720 "aliases": [ 00:06:10.720 "e1cb3d75-ad36-4f74-b3d1-7c6f6c243fc5" 00:06:10.720 ], 00:06:10.720 "product_name": "Malloc disk", 00:06:10.720 "block_size": 512, 00:06:10.720 "num_blocks": 16384, 00:06:10.720 "uuid": "e1cb3d75-ad36-4f74-b3d1-7c6f6c243fc5", 00:06:10.720 "assigned_rate_limits": { 00:06:10.720 "rw_ios_per_sec": 0, 00:06:10.720 "rw_mbytes_per_sec": 0, 00:06:10.720 "r_mbytes_per_sec": 0, 00:06:10.720 "w_mbytes_per_sec": 0 00:06:10.720 }, 00:06:10.720 "claimed": true, 00:06:10.720 "claim_type": "exclusive_write", 00:06:10.720 "zoned": false, 00:06:10.720 "supported_io_types": { 00:06:10.720 "read": true, 00:06:10.720 "write": true, 00:06:10.720 "unmap": true, 00:06:10.720 "flush": true, 00:06:10.720 "reset": true, 00:06:10.720 "nvme_admin": false, 
00:06:10.720 "nvme_io": false, 00:06:10.720 "nvme_io_md": false, 00:06:10.720 "write_zeroes": true, 00:06:10.720 "zcopy": true, 00:06:10.720 "get_zone_info": false, 00:06:10.720 "zone_management": false, 00:06:10.720 "zone_append": false, 00:06:10.720 "compare": false, 00:06:10.720 "compare_and_write": false, 00:06:10.720 "abort": true, 00:06:10.720 "seek_hole": false, 00:06:10.720 "seek_data": false, 00:06:10.720 "copy": true, 00:06:10.720 "nvme_iov_md": false 00:06:10.720 }, 00:06:10.720 "memory_domains": [ 00:06:10.720 { 00:06:10.720 "dma_device_id": "system", 00:06:10.720 "dma_device_type": 1 00:06:10.720 }, 00:06:10.720 { 00:06:10.720 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:10.720 "dma_device_type": 2 00:06:10.720 } 00:06:10.720 ], 00:06:10.720 "driver_specific": {} 00:06:10.720 }, 00:06:10.720 { 00:06:10.720 "name": "Passthru0", 00:06:10.720 "aliases": [ 00:06:10.720 "4cec5537-b7a7-54c5-b9a3-6a11bcc35382" 00:06:10.720 ], 00:06:10.720 "product_name": "passthru", 00:06:10.720 "block_size": 512, 00:06:10.720 "num_blocks": 16384, 00:06:10.720 "uuid": "4cec5537-b7a7-54c5-b9a3-6a11bcc35382", 00:06:10.720 "assigned_rate_limits": { 00:06:10.720 "rw_ios_per_sec": 0, 00:06:10.720 "rw_mbytes_per_sec": 0, 00:06:10.720 "r_mbytes_per_sec": 0, 00:06:10.720 "w_mbytes_per_sec": 0 00:06:10.720 }, 00:06:10.720 "claimed": false, 00:06:10.720 "zoned": false, 00:06:10.720 "supported_io_types": { 00:06:10.720 "read": true, 00:06:10.720 "write": true, 00:06:10.720 "unmap": true, 00:06:10.720 "flush": true, 00:06:10.720 "reset": true, 00:06:10.720 "nvme_admin": false, 00:06:10.720 "nvme_io": false, 00:06:10.720 "nvme_io_md": false, 00:06:10.720 "write_zeroes": true, 00:06:10.720 "zcopy": true, 00:06:10.720 "get_zone_info": false, 00:06:10.720 "zone_management": false, 00:06:10.720 "zone_append": false, 00:06:10.720 "compare": false, 00:06:10.720 "compare_and_write": false, 00:06:10.720 "abort": true, 00:06:10.720 "seek_hole": false, 00:06:10.720 "seek_data": false, 00:06:10.720 "copy": true, 00:06:10.720 "nvme_iov_md": false 00:06:10.720 }, 00:06:10.720 "memory_domains": [ 00:06:10.720 { 00:06:10.720 "dma_device_id": "system", 00:06:10.720 "dma_device_type": 1 00:06:10.720 }, 00:06:10.720 { 00:06:10.720 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:10.720 "dma_device_type": 2 00:06:10.720 } 00:06:10.720 ], 00:06:10.720 "driver_specific": { 00:06:10.720 "passthru": { 00:06:10.720 "name": "Passthru0", 00:06:10.720 "base_bdev_name": "Malloc0" 00:06:10.720 } 00:06:10.720 } 00:06:10.720 } 00:06:10.720 ]' 00:06:10.721 09:33:58 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:10.721 09:33:58 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:10.721 09:33:58 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:10.721 09:33:58 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:10.721 09:33:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:10.721 09:33:58 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:10.721 09:33:58 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:06:10.721 09:33:58 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:10.721 09:33:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:10.721 09:33:58 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:10.721 09:33:58 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:10.721 09:33:58 rpc.rpc_integrity -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:06:10.721 09:33:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:10.721 09:33:58 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:10.721 09:33:58 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:10.721 09:33:58 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:10.721 09:33:58 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:10.721 00:06:10.721 real 0m0.327s 00:06:10.721 user 0m0.239s 00:06:10.721 sys 0m0.019s 00:06:10.721 09:33:58 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:10.721 09:33:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:10.721 ************************************ 00:06:10.721 END TEST rpc_integrity 00:06:10.721 ************************************ 00:06:10.721 09:33:58 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:06:10.721 09:33:58 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:10.721 09:33:58 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:10.721 09:33:58 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:10.721 ************************************ 00:06:10.721 START TEST rpc_plugins 00:06:10.721 ************************************ 00:06:10.721 09:33:58 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:06:10.721 09:33:58 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:06:10.721 09:33:58 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:10.721 09:33:58 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:10.721 09:33:58 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:10.721 09:33:58 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:06:10.721 09:33:58 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:06:10.721 09:33:58 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:10.721 09:33:58 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:10.721 09:33:58 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:10.721 09:33:58 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:06:10.721 { 00:06:10.721 "name": "Malloc1", 00:06:10.721 "aliases": [ 00:06:10.721 "e6b1db73-5747-45ce-828b-d5ec4a2d22d5" 00:06:10.721 ], 00:06:10.721 "product_name": "Malloc disk", 00:06:10.721 "block_size": 4096, 00:06:10.721 "num_blocks": 256, 00:06:10.721 "uuid": "e6b1db73-5747-45ce-828b-d5ec4a2d22d5", 00:06:10.721 "assigned_rate_limits": { 00:06:10.721 "rw_ios_per_sec": 0, 00:06:10.721 "rw_mbytes_per_sec": 0, 00:06:10.721 "r_mbytes_per_sec": 0, 00:06:10.721 "w_mbytes_per_sec": 0 00:06:10.721 }, 00:06:10.721 "claimed": false, 00:06:10.721 "zoned": false, 00:06:10.721 "supported_io_types": { 00:06:10.721 "read": true, 00:06:10.721 "write": true, 00:06:10.721 "unmap": true, 00:06:10.721 "flush": true, 00:06:10.721 "reset": true, 00:06:10.721 "nvme_admin": false, 00:06:10.721 "nvme_io": false, 00:06:10.721 "nvme_io_md": false, 00:06:10.721 "write_zeroes": true, 00:06:10.721 "zcopy": true, 00:06:10.721 "get_zone_info": false, 00:06:10.721 "zone_management": false, 00:06:10.721 "zone_append": false, 00:06:10.721 "compare": false, 00:06:10.721 "compare_and_write": false, 00:06:10.721 "abort": true, 00:06:10.721 "seek_hole": false, 00:06:10.721 "seek_data": false, 00:06:10.721 "copy": true, 00:06:10.721 "nvme_iov_md": false 00:06:10.721 }, 00:06:10.721 "memory_domains": [ 00:06:10.721 { 
00:06:10.721 "dma_device_id": "system", 00:06:10.721 "dma_device_type": 1 00:06:10.721 }, 00:06:10.721 { 00:06:10.721 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:10.721 "dma_device_type": 2 00:06:10.721 } 00:06:10.721 ], 00:06:10.721 "driver_specific": {} 00:06:10.721 } 00:06:10.721 ]' 00:06:10.721 09:33:58 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:06:10.979 09:33:58 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:06:10.979 09:33:58 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:06:10.979 09:33:58 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:10.979 09:33:58 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:10.979 09:33:58 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:10.979 09:33:58 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:06:10.979 09:33:58 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:10.979 09:33:58 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:10.979 09:33:58 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:10.979 09:33:58 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:06:10.979 09:33:58 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:06:10.979 ************************************ 00:06:10.979 END TEST rpc_plugins 00:06:10.979 ************************************ 00:06:10.979 09:33:58 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:06:10.979 00:06:10.979 real 0m0.171s 00:06:10.979 user 0m0.115s 00:06:10.979 sys 0m0.014s 00:06:10.979 09:33:58 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:10.979 09:33:58 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:10.979 09:33:58 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:06:10.979 09:33:58 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:10.979 09:33:58 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:10.979 09:33:58 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:10.979 ************************************ 00:06:10.979 START TEST rpc_trace_cmd_test 00:06:10.979 ************************************ 00:06:10.979 09:33:58 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:06:10.979 09:33:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:06:10.979 09:33:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:06:10.979 09:33:58 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:10.979 09:33:58 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:10.979 09:33:58 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:10.979 09:33:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:06:10.979 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid56772", 00:06:10.979 "tpoint_group_mask": "0x8", 00:06:10.979 "iscsi_conn": { 00:06:10.979 "mask": "0x2", 00:06:10.979 "tpoint_mask": "0x0" 00:06:10.979 }, 00:06:10.979 "scsi": { 00:06:10.979 "mask": "0x4", 00:06:10.979 "tpoint_mask": "0x0" 00:06:10.979 }, 00:06:10.979 "bdev": { 00:06:10.979 "mask": "0x8", 00:06:10.979 "tpoint_mask": "0xffffffffffffffff" 00:06:10.979 }, 00:06:10.979 "nvmf_rdma": { 00:06:10.979 "mask": "0x10", 00:06:10.979 "tpoint_mask": "0x0" 00:06:10.979 }, 00:06:10.979 "nvmf_tcp": { 00:06:10.979 "mask": "0x20", 00:06:10.979 "tpoint_mask": "0x0" 00:06:10.979 }, 00:06:10.980 "ftl": { 00:06:10.980 
"mask": "0x40", 00:06:10.980 "tpoint_mask": "0x0" 00:06:10.980 }, 00:06:10.980 "blobfs": { 00:06:10.980 "mask": "0x80", 00:06:10.980 "tpoint_mask": "0x0" 00:06:10.980 }, 00:06:10.980 "dsa": { 00:06:10.980 "mask": "0x200", 00:06:10.980 "tpoint_mask": "0x0" 00:06:10.980 }, 00:06:10.980 "thread": { 00:06:10.980 "mask": "0x400", 00:06:10.980 "tpoint_mask": "0x0" 00:06:10.980 }, 00:06:10.980 "nvme_pcie": { 00:06:10.980 "mask": "0x800", 00:06:10.980 "tpoint_mask": "0x0" 00:06:10.980 }, 00:06:10.980 "iaa": { 00:06:10.980 "mask": "0x1000", 00:06:10.980 "tpoint_mask": "0x0" 00:06:10.980 }, 00:06:10.980 "nvme_tcp": { 00:06:10.980 "mask": "0x2000", 00:06:10.980 "tpoint_mask": "0x0" 00:06:10.980 }, 00:06:10.980 "bdev_nvme": { 00:06:10.980 "mask": "0x4000", 00:06:10.980 "tpoint_mask": "0x0" 00:06:10.980 }, 00:06:10.980 "sock": { 00:06:10.980 "mask": "0x8000", 00:06:10.980 "tpoint_mask": "0x0" 00:06:10.980 }, 00:06:10.980 "blob": { 00:06:10.980 "mask": "0x10000", 00:06:10.980 "tpoint_mask": "0x0" 00:06:10.980 }, 00:06:10.980 "bdev_raid": { 00:06:10.980 "mask": "0x20000", 00:06:10.980 "tpoint_mask": "0x0" 00:06:10.980 }, 00:06:10.980 "scheduler": { 00:06:10.980 "mask": "0x40000", 00:06:10.980 "tpoint_mask": "0x0" 00:06:10.980 } 00:06:10.980 }' 00:06:10.980 09:33:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:06:10.980 09:33:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:06:10.980 09:33:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:06:11.239 09:33:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:06:11.239 09:33:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:06:11.239 09:33:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:06:11.239 09:33:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:06:11.239 09:33:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:06:11.239 09:33:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:06:11.239 ************************************ 00:06:11.239 END TEST rpc_trace_cmd_test 00:06:11.239 ************************************ 00:06:11.239 09:33:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:06:11.239 00:06:11.239 real 0m0.250s 00:06:11.239 user 0m0.217s 00:06:11.239 sys 0m0.022s 00:06:11.239 09:33:58 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:11.239 09:33:58 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:11.239 09:33:58 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:06:11.239 09:33:58 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:06:11.239 09:33:58 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:06:11.239 09:33:58 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:11.239 09:33:58 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:11.239 09:33:58 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:11.239 ************************************ 00:06:11.239 START TEST rpc_daemon_integrity 00:06:11.239 ************************************ 00:06:11.239 09:33:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:06:11.239 09:33:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:11.239 09:33:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:11.239 09:33:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:11.239 
09:33:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:11.239 09:33:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:11.240 09:33:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:11.507 09:33:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:11.507 09:33:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:11.507 09:33:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:11.507 09:33:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:11.507 09:33:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:11.507 09:33:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:06:11.507 09:33:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:11.507 09:33:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:11.507 09:33:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:11.507 09:33:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:11.507 09:33:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:11.507 { 00:06:11.507 "name": "Malloc2", 00:06:11.507 "aliases": [ 00:06:11.507 "de31c5a4-8810-4ca8-8408-8f0d8c324ef2" 00:06:11.507 ], 00:06:11.507 "product_name": "Malloc disk", 00:06:11.507 "block_size": 512, 00:06:11.507 "num_blocks": 16384, 00:06:11.507 "uuid": "de31c5a4-8810-4ca8-8408-8f0d8c324ef2", 00:06:11.507 "assigned_rate_limits": { 00:06:11.507 "rw_ios_per_sec": 0, 00:06:11.507 "rw_mbytes_per_sec": 0, 00:06:11.507 "r_mbytes_per_sec": 0, 00:06:11.507 "w_mbytes_per_sec": 0 00:06:11.507 }, 00:06:11.507 "claimed": false, 00:06:11.507 "zoned": false, 00:06:11.507 "supported_io_types": { 00:06:11.507 "read": true, 00:06:11.507 "write": true, 00:06:11.507 "unmap": true, 00:06:11.507 "flush": true, 00:06:11.507 "reset": true, 00:06:11.507 "nvme_admin": false, 00:06:11.507 "nvme_io": false, 00:06:11.507 "nvme_io_md": false, 00:06:11.507 "write_zeroes": true, 00:06:11.507 "zcopy": true, 00:06:11.507 "get_zone_info": false, 00:06:11.507 "zone_management": false, 00:06:11.507 "zone_append": false, 00:06:11.507 "compare": false, 00:06:11.507 "compare_and_write": false, 00:06:11.507 "abort": true, 00:06:11.507 "seek_hole": false, 00:06:11.507 "seek_data": false, 00:06:11.507 "copy": true, 00:06:11.507 "nvme_iov_md": false 00:06:11.507 }, 00:06:11.507 "memory_domains": [ 00:06:11.507 { 00:06:11.507 "dma_device_id": "system", 00:06:11.507 "dma_device_type": 1 00:06:11.507 }, 00:06:11.507 { 00:06:11.507 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:11.507 "dma_device_type": 2 00:06:11.507 } 00:06:11.507 ], 00:06:11.507 "driver_specific": {} 00:06:11.507 } 00:06:11.507 ]' 00:06:11.507 09:33:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:11.507 09:33:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:11.507 09:33:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:06:11.507 09:33:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:11.507 09:33:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:11.507 [2024-11-19 09:33:58.941616] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:06:11.507 [2024-11-19 09:33:58.941682] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:06:11.507 [2024-11-19 09:33:58.941704] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xd38790 00:06:11.507 [2024-11-19 09:33:58.941713] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:11.507 [2024-11-19 09:33:58.943610] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:11.507 [2024-11-19 09:33:58.943878] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:11.507 Passthru0 00:06:11.507 09:33:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:11.507 09:33:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:11.507 09:33:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:11.507 09:33:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:11.507 09:33:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:11.507 09:33:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:11.507 { 00:06:11.507 "name": "Malloc2", 00:06:11.507 "aliases": [ 00:06:11.507 "de31c5a4-8810-4ca8-8408-8f0d8c324ef2" 00:06:11.507 ], 00:06:11.507 "product_name": "Malloc disk", 00:06:11.507 "block_size": 512, 00:06:11.507 "num_blocks": 16384, 00:06:11.507 "uuid": "de31c5a4-8810-4ca8-8408-8f0d8c324ef2", 00:06:11.507 "assigned_rate_limits": { 00:06:11.507 "rw_ios_per_sec": 0, 00:06:11.507 "rw_mbytes_per_sec": 0, 00:06:11.507 "r_mbytes_per_sec": 0, 00:06:11.507 "w_mbytes_per_sec": 0 00:06:11.507 }, 00:06:11.507 "claimed": true, 00:06:11.507 "claim_type": "exclusive_write", 00:06:11.507 "zoned": false, 00:06:11.507 "supported_io_types": { 00:06:11.507 "read": true, 00:06:11.507 "write": true, 00:06:11.507 "unmap": true, 00:06:11.507 "flush": true, 00:06:11.507 "reset": true, 00:06:11.507 "nvme_admin": false, 00:06:11.507 "nvme_io": false, 00:06:11.507 "nvme_io_md": false, 00:06:11.507 "write_zeroes": true, 00:06:11.507 "zcopy": true, 00:06:11.507 "get_zone_info": false, 00:06:11.507 "zone_management": false, 00:06:11.507 "zone_append": false, 00:06:11.507 "compare": false, 00:06:11.507 "compare_and_write": false, 00:06:11.507 "abort": true, 00:06:11.507 "seek_hole": false, 00:06:11.507 "seek_data": false, 00:06:11.507 "copy": true, 00:06:11.507 "nvme_iov_md": false 00:06:11.507 }, 00:06:11.507 "memory_domains": [ 00:06:11.507 { 00:06:11.507 "dma_device_id": "system", 00:06:11.507 "dma_device_type": 1 00:06:11.507 }, 00:06:11.507 { 00:06:11.507 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:11.507 "dma_device_type": 2 00:06:11.507 } 00:06:11.507 ], 00:06:11.507 "driver_specific": {} 00:06:11.507 }, 00:06:11.507 { 00:06:11.507 "name": "Passthru0", 00:06:11.507 "aliases": [ 00:06:11.507 "41f33f4c-c7b6-5308-ab4f-031a212d55fe" 00:06:11.507 ], 00:06:11.507 "product_name": "passthru", 00:06:11.507 "block_size": 512, 00:06:11.507 "num_blocks": 16384, 00:06:11.507 "uuid": "41f33f4c-c7b6-5308-ab4f-031a212d55fe", 00:06:11.507 "assigned_rate_limits": { 00:06:11.507 "rw_ios_per_sec": 0, 00:06:11.507 "rw_mbytes_per_sec": 0, 00:06:11.507 "r_mbytes_per_sec": 0, 00:06:11.507 "w_mbytes_per_sec": 0 00:06:11.507 }, 00:06:11.507 "claimed": false, 00:06:11.507 "zoned": false, 00:06:11.507 "supported_io_types": { 00:06:11.507 "read": true, 00:06:11.507 "write": true, 00:06:11.507 "unmap": true, 00:06:11.507 "flush": true, 00:06:11.507 "reset": true, 00:06:11.507 "nvme_admin": false, 00:06:11.507 "nvme_io": false, 00:06:11.507 "nvme_io_md": 
false, 00:06:11.507 "write_zeroes": true, 00:06:11.507 "zcopy": true, 00:06:11.507 "get_zone_info": false, 00:06:11.507 "zone_management": false, 00:06:11.507 "zone_append": false, 00:06:11.507 "compare": false, 00:06:11.507 "compare_and_write": false, 00:06:11.507 "abort": true, 00:06:11.507 "seek_hole": false, 00:06:11.507 "seek_data": false, 00:06:11.507 "copy": true, 00:06:11.507 "nvme_iov_md": false 00:06:11.507 }, 00:06:11.507 "memory_domains": [ 00:06:11.507 { 00:06:11.507 "dma_device_id": "system", 00:06:11.507 "dma_device_type": 1 00:06:11.507 }, 00:06:11.507 { 00:06:11.507 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:11.507 "dma_device_type": 2 00:06:11.507 } 00:06:11.507 ], 00:06:11.507 "driver_specific": { 00:06:11.507 "passthru": { 00:06:11.507 "name": "Passthru0", 00:06:11.507 "base_bdev_name": "Malloc2" 00:06:11.507 } 00:06:11.507 } 00:06:11.507 } 00:06:11.507 ]' 00:06:11.507 09:33:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:11.507 09:33:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:11.507 09:33:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:11.507 09:33:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:11.507 09:33:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:11.507 09:33:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:11.507 09:33:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:06:11.507 09:33:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:11.507 09:33:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:11.508 09:33:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:11.508 09:33:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:11.508 09:33:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:11.508 09:33:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:11.508 09:33:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:11.508 09:33:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:11.508 09:33:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:11.508 ************************************ 00:06:11.508 END TEST rpc_daemon_integrity 00:06:11.508 ************************************ 00:06:11.508 09:33:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:11.508 00:06:11.508 real 0m0.313s 00:06:11.508 user 0m0.205s 00:06:11.508 sys 0m0.039s 00:06:11.508 09:33:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:11.508 09:33:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:11.775 09:33:59 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:06:11.775 09:33:59 rpc -- rpc/rpc.sh@84 -- # killprocess 56772 00:06:11.775 09:33:59 rpc -- common/autotest_common.sh@954 -- # '[' -z 56772 ']' 00:06:11.775 09:33:59 rpc -- common/autotest_common.sh@958 -- # kill -0 56772 00:06:11.775 09:33:59 rpc -- common/autotest_common.sh@959 -- # uname 00:06:11.775 09:33:59 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:11.775 09:33:59 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 56772 00:06:11.775 09:33:59 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:11.775 
killing process with pid 56772 00:06:11.775 09:33:59 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:11.775 09:33:59 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 56772' 00:06:11.775 09:33:59 rpc -- common/autotest_common.sh@973 -- # kill 56772 00:06:11.775 09:33:59 rpc -- common/autotest_common.sh@978 -- # wait 56772 00:06:12.036 00:06:12.036 real 0m2.414s 00:06:12.036 user 0m3.103s 00:06:12.036 sys 0m0.585s 00:06:12.036 09:33:59 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:12.036 09:33:59 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:12.036 ************************************ 00:06:12.036 END TEST rpc 00:06:12.036 ************************************ 00:06:12.036 09:33:59 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:06:12.036 09:33:59 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:12.036 09:33:59 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:12.036 09:33:59 -- common/autotest_common.sh@10 -- # set +x 00:06:12.036 ************************************ 00:06:12.036 START TEST skip_rpc 00:06:12.036 ************************************ 00:06:12.036 09:33:59 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:06:12.036 * Looking for test storage... 00:06:12.036 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:06:12.036 09:33:59 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:12.036 09:33:59 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:06:12.036 09:33:59 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:12.295 09:33:59 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:12.295 09:33:59 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:12.295 09:33:59 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:12.295 09:33:59 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:12.295 09:33:59 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:12.295 09:33:59 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:12.295 09:33:59 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:12.295 09:33:59 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:12.295 09:33:59 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:12.295 09:33:59 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:12.295 09:33:59 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:12.295 09:33:59 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:12.295 09:33:59 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:12.295 09:33:59 skip_rpc -- scripts/common.sh@345 -- # : 1 00:06:12.295 09:33:59 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:12.295 09:33:59 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:12.295 09:33:59 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:12.295 09:33:59 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:06:12.295 09:33:59 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:12.295 09:33:59 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:06:12.295 09:33:59 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:12.295 09:33:59 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:12.295 09:33:59 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:06:12.295 09:33:59 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:12.295 09:33:59 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:06:12.295 09:33:59 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:12.295 09:33:59 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:12.295 09:33:59 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:12.295 09:33:59 skip_rpc -- scripts/common.sh@368 -- # return 0 00:06:12.295 09:33:59 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:12.295 09:33:59 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:12.295 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.295 --rc genhtml_branch_coverage=1 00:06:12.295 --rc genhtml_function_coverage=1 00:06:12.295 --rc genhtml_legend=1 00:06:12.295 --rc geninfo_all_blocks=1 00:06:12.295 --rc geninfo_unexecuted_blocks=1 00:06:12.295 00:06:12.295 ' 00:06:12.295 09:33:59 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:12.295 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.295 --rc genhtml_branch_coverage=1 00:06:12.295 --rc genhtml_function_coverage=1 00:06:12.295 --rc genhtml_legend=1 00:06:12.295 --rc geninfo_all_blocks=1 00:06:12.295 --rc geninfo_unexecuted_blocks=1 00:06:12.295 00:06:12.295 ' 00:06:12.295 09:33:59 skip_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:12.295 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.295 --rc genhtml_branch_coverage=1 00:06:12.295 --rc genhtml_function_coverage=1 00:06:12.295 --rc genhtml_legend=1 00:06:12.295 --rc geninfo_all_blocks=1 00:06:12.295 --rc geninfo_unexecuted_blocks=1 00:06:12.295 00:06:12.295 ' 00:06:12.295 09:33:59 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:12.295 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.295 --rc genhtml_branch_coverage=1 00:06:12.295 --rc genhtml_function_coverage=1 00:06:12.295 --rc genhtml_legend=1 00:06:12.295 --rc geninfo_all_blocks=1 00:06:12.295 --rc geninfo_unexecuted_blocks=1 00:06:12.295 00:06:12.295 ' 00:06:12.295 09:33:59 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:12.295 09:33:59 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:12.295 09:33:59 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:06:12.295 09:33:59 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:12.295 09:33:59 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:12.295 09:33:59 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:12.295 ************************************ 00:06:12.295 START TEST skip_rpc 00:06:12.295 ************************************ 00:06:12.295 09:33:59 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:06:12.295 09:33:59 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@16 -- # local spdk_pid=56965 00:06:12.295 09:33:59 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:06:12.295 09:33:59 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:12.295 09:33:59 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:06:12.295 [2024-11-19 09:33:59.861906] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:06:12.295 [2024-11-19 09:33:59.862773] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56965 ] 00:06:12.555 [2024-11-19 09:34:00.011503] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.555 [2024-11-19 09:34:00.096074] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.813 [2024-11-19 09:34:00.192081] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:18.144 09:34:04 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:06:18.144 09:34:04 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:18.144 09:34:04 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:06:18.144 09:34:04 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:18.144 09:34:04 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:18.144 09:34:04 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:18.144 09:34:04 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:18.144 09:34:04 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:06:18.144 09:34:04 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:18.144 09:34:04 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:18.144 09:34:04 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:18.144 09:34:04 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:18.144 09:34:04 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:18.144 09:34:04 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:18.144 09:34:04 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:18.144 09:34:04 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:06:18.144 09:34:04 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 56965 00:06:18.144 09:34:04 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 56965 ']' 00:06:18.144 09:34:04 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 56965 00:06:18.144 09:34:04 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:06:18.144 09:34:04 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:18.144 09:34:04 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 56965 00:06:18.144 killing process with pid 56965 00:06:18.144 09:34:04 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:18.144 09:34:04 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:18.144 09:34:04 skip_rpc.skip_rpc -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 56965' 00:06:18.144 09:34:04 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 56965 00:06:18.144 09:34:04 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 56965 00:06:18.144 ************************************ 00:06:18.145 END TEST skip_rpc 00:06:18.145 ************************************ 00:06:18.145 00:06:18.145 real 0m5.429s 00:06:18.145 user 0m4.992s 00:06:18.145 sys 0m0.318s 00:06:18.145 09:34:05 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:18.145 09:34:05 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:18.145 09:34:05 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:06:18.145 09:34:05 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:18.145 09:34:05 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:18.145 09:34:05 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:18.145 ************************************ 00:06:18.145 START TEST skip_rpc_with_json 00:06:18.145 ************************************ 00:06:18.145 09:34:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:06:18.145 09:34:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:06:18.145 09:34:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57051 00:06:18.145 09:34:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:18.145 09:34:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:18.145 09:34:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57051 00:06:18.145 09:34:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 57051 ']' 00:06:18.145 09:34:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:18.145 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:18.145 09:34:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:18.145 09:34:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:18.145 09:34:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:18.145 09:34:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:18.145 [2024-11-19 09:34:05.349571] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 
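The skip_rpc_with_json case starting up here exercises configuration save/restore rather than individual RPCs: the first target (pid 57051) has a TCP NVMe-oF transport created over RPC, its live configuration is dumped with save_config, and a second target (pid 57079) is later launched with the RPC server disabled, consuming that JSON via --json. A minimal sketch of the same round trip, assuming a running target on the default socket and a writable config.json in the current directory, might look like:

    # nvmf_get_transports fails first in the test (no transport exists yet), then one is created
    scripts/rpc.py nvmf_create_transport -t tcp
    # dump the running subsystem configuration to a file
    scripts/rpc.py save_config > config.json
    # restart the target purely from the saved file, with no RPC server
    build/bin/spdk_tgt --no-rpc-server -m 0x1 --json config.json

The full JSON produced by save_config is echoed below before the second target is started from it.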
00:06:18.145 [2024-11-19 09:34:05.351610] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57051 ] 00:06:18.145 [2024-11-19 09:34:05.505046] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.145 [2024-11-19 09:34:05.574428] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.145 [2024-11-19 09:34:05.653087] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:18.712 09:34:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:18.712 09:34:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:06:18.712 09:34:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:06:18.712 09:34:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:18.712 09:34:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:18.712 [2024-11-19 09:34:06.294668] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:06:18.712 request: 00:06:18.712 { 00:06:18.712 "trtype": "tcp", 00:06:18.712 "method": "nvmf_get_transports", 00:06:18.712 "req_id": 1 00:06:18.712 } 00:06:18.712 Got JSON-RPC error response 00:06:18.712 response: 00:06:18.712 { 00:06:18.712 "code": -19, 00:06:18.712 "message": "No such device" 00:06:18.712 } 00:06:18.712 09:34:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:18.712 09:34:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:06:18.712 09:34:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:18.712 09:34:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:18.712 [2024-11-19 09:34:06.306806] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:18.712 09:34:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:18.712 09:34:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:06:18.712 09:34:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:18.712 09:34:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:18.971 09:34:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:18.971 09:34:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:18.971 { 00:06:18.971 "subsystems": [ 00:06:18.971 { 00:06:18.971 "subsystem": "fsdev", 00:06:18.971 "config": [ 00:06:18.971 { 00:06:18.971 "method": "fsdev_set_opts", 00:06:18.971 "params": { 00:06:18.971 "fsdev_io_pool_size": 65535, 00:06:18.971 "fsdev_io_cache_size": 256 00:06:18.971 } 00:06:18.971 } 00:06:18.971 ] 00:06:18.971 }, 00:06:18.971 { 00:06:18.971 "subsystem": "keyring", 00:06:18.971 "config": [] 00:06:18.971 }, 00:06:18.971 { 00:06:18.971 "subsystem": "iobuf", 00:06:18.971 "config": [ 00:06:18.971 { 00:06:18.971 "method": "iobuf_set_options", 00:06:18.971 "params": { 00:06:18.971 "small_pool_count": 8192, 00:06:18.971 "large_pool_count": 1024, 00:06:18.971 "small_bufsize": 8192, 00:06:18.971 "large_bufsize": 135168, 00:06:18.972 "enable_numa": false 00:06:18.972 } 
00:06:18.972 } 00:06:18.972 ] 00:06:18.972 }, 00:06:18.972 { 00:06:18.972 "subsystem": "sock", 00:06:18.972 "config": [ 00:06:18.972 { 00:06:18.972 "method": "sock_set_default_impl", 00:06:18.972 "params": { 00:06:18.972 "impl_name": "uring" 00:06:18.972 } 00:06:18.972 }, 00:06:18.972 { 00:06:18.972 "method": "sock_impl_set_options", 00:06:18.972 "params": { 00:06:18.972 "impl_name": "ssl", 00:06:18.972 "recv_buf_size": 4096, 00:06:18.972 "send_buf_size": 4096, 00:06:18.972 "enable_recv_pipe": true, 00:06:18.972 "enable_quickack": false, 00:06:18.972 "enable_placement_id": 0, 00:06:18.972 "enable_zerocopy_send_server": true, 00:06:18.972 "enable_zerocopy_send_client": false, 00:06:18.972 "zerocopy_threshold": 0, 00:06:18.972 "tls_version": 0, 00:06:18.972 "enable_ktls": false 00:06:18.972 } 00:06:18.972 }, 00:06:18.972 { 00:06:18.972 "method": "sock_impl_set_options", 00:06:18.972 "params": { 00:06:18.972 "impl_name": "posix", 00:06:18.972 "recv_buf_size": 2097152, 00:06:18.972 "send_buf_size": 2097152, 00:06:18.972 "enable_recv_pipe": true, 00:06:18.972 "enable_quickack": false, 00:06:18.972 "enable_placement_id": 0, 00:06:18.972 "enable_zerocopy_send_server": true, 00:06:18.972 "enable_zerocopy_send_client": false, 00:06:18.972 "zerocopy_threshold": 0, 00:06:18.972 "tls_version": 0, 00:06:18.972 "enable_ktls": false 00:06:18.972 } 00:06:18.972 }, 00:06:18.972 { 00:06:18.972 "method": "sock_impl_set_options", 00:06:18.972 "params": { 00:06:18.972 "impl_name": "uring", 00:06:18.972 "recv_buf_size": 2097152, 00:06:18.972 "send_buf_size": 2097152, 00:06:18.972 "enable_recv_pipe": true, 00:06:18.972 "enable_quickack": false, 00:06:18.972 "enable_placement_id": 0, 00:06:18.972 "enable_zerocopy_send_server": false, 00:06:18.972 "enable_zerocopy_send_client": false, 00:06:18.972 "zerocopy_threshold": 0, 00:06:18.972 "tls_version": 0, 00:06:18.972 "enable_ktls": false 00:06:18.972 } 00:06:18.972 } 00:06:18.972 ] 00:06:18.972 }, 00:06:18.972 { 00:06:18.972 "subsystem": "vmd", 00:06:18.972 "config": [] 00:06:18.972 }, 00:06:18.972 { 00:06:18.972 "subsystem": "accel", 00:06:18.972 "config": [ 00:06:18.972 { 00:06:18.972 "method": "accel_set_options", 00:06:18.972 "params": { 00:06:18.972 "small_cache_size": 128, 00:06:18.972 "large_cache_size": 16, 00:06:18.972 "task_count": 2048, 00:06:18.972 "sequence_count": 2048, 00:06:18.972 "buf_count": 2048 00:06:18.972 } 00:06:18.972 } 00:06:18.972 ] 00:06:18.972 }, 00:06:18.972 { 00:06:18.972 "subsystem": "bdev", 00:06:18.972 "config": [ 00:06:18.972 { 00:06:18.972 "method": "bdev_set_options", 00:06:18.972 "params": { 00:06:18.972 "bdev_io_pool_size": 65535, 00:06:18.972 "bdev_io_cache_size": 256, 00:06:18.972 "bdev_auto_examine": true, 00:06:18.972 "iobuf_small_cache_size": 128, 00:06:18.972 "iobuf_large_cache_size": 16 00:06:18.972 } 00:06:18.972 }, 00:06:18.972 { 00:06:18.972 "method": "bdev_raid_set_options", 00:06:18.972 "params": { 00:06:18.972 "process_window_size_kb": 1024, 00:06:18.972 "process_max_bandwidth_mb_sec": 0 00:06:18.972 } 00:06:18.972 }, 00:06:18.972 { 00:06:18.972 "method": "bdev_iscsi_set_options", 00:06:18.972 "params": { 00:06:18.972 "timeout_sec": 30 00:06:18.972 } 00:06:18.972 }, 00:06:18.972 { 00:06:18.972 "method": "bdev_nvme_set_options", 00:06:18.972 "params": { 00:06:18.972 "action_on_timeout": "none", 00:06:18.972 "timeout_us": 0, 00:06:18.972 "timeout_admin_us": 0, 00:06:18.972 "keep_alive_timeout_ms": 10000, 00:06:18.972 "arbitration_burst": 0, 00:06:18.972 "low_priority_weight": 0, 00:06:18.972 "medium_priority_weight": 
0, 00:06:18.972 "high_priority_weight": 0, 00:06:18.972 "nvme_adminq_poll_period_us": 10000, 00:06:18.972 "nvme_ioq_poll_period_us": 0, 00:06:18.972 "io_queue_requests": 0, 00:06:18.972 "delay_cmd_submit": true, 00:06:18.972 "transport_retry_count": 4, 00:06:18.972 "bdev_retry_count": 3, 00:06:18.972 "transport_ack_timeout": 0, 00:06:18.972 "ctrlr_loss_timeout_sec": 0, 00:06:18.972 "reconnect_delay_sec": 0, 00:06:18.972 "fast_io_fail_timeout_sec": 0, 00:06:18.972 "disable_auto_failback": false, 00:06:18.972 "generate_uuids": false, 00:06:18.972 "transport_tos": 0, 00:06:18.972 "nvme_error_stat": false, 00:06:18.972 "rdma_srq_size": 0, 00:06:18.972 "io_path_stat": false, 00:06:18.972 "allow_accel_sequence": false, 00:06:18.972 "rdma_max_cq_size": 0, 00:06:18.972 "rdma_cm_event_timeout_ms": 0, 00:06:18.972 "dhchap_digests": [ 00:06:18.972 "sha256", 00:06:18.972 "sha384", 00:06:18.972 "sha512" 00:06:18.972 ], 00:06:18.972 "dhchap_dhgroups": [ 00:06:18.972 "null", 00:06:18.972 "ffdhe2048", 00:06:18.972 "ffdhe3072", 00:06:18.972 "ffdhe4096", 00:06:18.972 "ffdhe6144", 00:06:18.972 "ffdhe8192" 00:06:18.972 ] 00:06:18.972 } 00:06:18.972 }, 00:06:18.972 { 00:06:18.972 "method": "bdev_nvme_set_hotplug", 00:06:18.972 "params": { 00:06:18.972 "period_us": 100000, 00:06:18.972 "enable": false 00:06:18.972 } 00:06:18.972 }, 00:06:18.972 { 00:06:18.972 "method": "bdev_wait_for_examine" 00:06:18.972 } 00:06:18.972 ] 00:06:18.972 }, 00:06:18.972 { 00:06:18.972 "subsystem": "scsi", 00:06:18.972 "config": null 00:06:18.972 }, 00:06:18.972 { 00:06:18.972 "subsystem": "scheduler", 00:06:18.972 "config": [ 00:06:18.972 { 00:06:18.972 "method": "framework_set_scheduler", 00:06:18.972 "params": { 00:06:18.972 "name": "static" 00:06:18.972 } 00:06:18.972 } 00:06:18.972 ] 00:06:18.972 }, 00:06:18.972 { 00:06:18.972 "subsystem": "vhost_scsi", 00:06:18.972 "config": [] 00:06:18.972 }, 00:06:18.972 { 00:06:18.972 "subsystem": "vhost_blk", 00:06:18.972 "config": [] 00:06:18.972 }, 00:06:18.972 { 00:06:18.972 "subsystem": "ublk", 00:06:18.972 "config": [] 00:06:18.972 }, 00:06:18.972 { 00:06:18.972 "subsystem": "nbd", 00:06:18.972 "config": [] 00:06:18.972 }, 00:06:18.972 { 00:06:18.972 "subsystem": "nvmf", 00:06:18.972 "config": [ 00:06:18.972 { 00:06:18.972 "method": "nvmf_set_config", 00:06:18.973 "params": { 00:06:18.973 "discovery_filter": "match_any", 00:06:18.973 "admin_cmd_passthru": { 00:06:18.973 "identify_ctrlr": false 00:06:18.973 }, 00:06:18.973 "dhchap_digests": [ 00:06:18.973 "sha256", 00:06:18.973 "sha384", 00:06:18.973 "sha512" 00:06:18.973 ], 00:06:18.973 "dhchap_dhgroups": [ 00:06:18.973 "null", 00:06:18.973 "ffdhe2048", 00:06:18.973 "ffdhe3072", 00:06:18.973 "ffdhe4096", 00:06:18.973 "ffdhe6144", 00:06:18.973 "ffdhe8192" 00:06:18.973 ] 00:06:18.973 } 00:06:18.973 }, 00:06:18.973 { 00:06:18.973 "method": "nvmf_set_max_subsystems", 00:06:18.973 "params": { 00:06:18.973 "max_subsystems": 1024 00:06:18.973 } 00:06:18.973 }, 00:06:18.973 { 00:06:18.973 "method": "nvmf_set_crdt", 00:06:18.973 "params": { 00:06:18.973 "crdt1": 0, 00:06:18.973 "crdt2": 0, 00:06:18.973 "crdt3": 0 00:06:18.973 } 00:06:18.973 }, 00:06:18.973 { 00:06:18.973 "method": "nvmf_create_transport", 00:06:18.973 "params": { 00:06:18.973 "trtype": "TCP", 00:06:18.973 "max_queue_depth": 128, 00:06:18.973 "max_io_qpairs_per_ctrlr": 127, 00:06:18.973 "in_capsule_data_size": 4096, 00:06:18.973 "max_io_size": 131072, 00:06:18.973 "io_unit_size": 131072, 00:06:18.973 "max_aq_depth": 128, 00:06:18.973 "num_shared_buffers": 511, 00:06:18.973 
"buf_cache_size": 4294967295, 00:06:18.973 "dif_insert_or_strip": false, 00:06:18.973 "zcopy": false, 00:06:18.973 "c2h_success": true, 00:06:18.973 "sock_priority": 0, 00:06:18.973 "abort_timeout_sec": 1, 00:06:18.973 "ack_timeout": 0, 00:06:18.973 "data_wr_pool_size": 0 00:06:18.973 } 00:06:18.973 } 00:06:18.973 ] 00:06:18.973 }, 00:06:18.973 { 00:06:18.973 "subsystem": "iscsi", 00:06:18.973 "config": [ 00:06:18.973 { 00:06:18.973 "method": "iscsi_set_options", 00:06:18.973 "params": { 00:06:18.973 "node_base": "iqn.2016-06.io.spdk", 00:06:18.973 "max_sessions": 128, 00:06:18.973 "max_connections_per_session": 2, 00:06:18.973 "max_queue_depth": 64, 00:06:18.973 "default_time2wait": 2, 00:06:18.973 "default_time2retain": 20, 00:06:18.973 "first_burst_length": 8192, 00:06:18.973 "immediate_data": true, 00:06:18.973 "allow_duplicated_isid": false, 00:06:18.973 "error_recovery_level": 0, 00:06:18.973 "nop_timeout": 60, 00:06:18.973 "nop_in_interval": 30, 00:06:18.973 "disable_chap": false, 00:06:18.973 "require_chap": false, 00:06:18.973 "mutual_chap": false, 00:06:18.973 "chap_group": 0, 00:06:18.973 "max_large_datain_per_connection": 64, 00:06:18.973 "max_r2t_per_connection": 4, 00:06:18.973 "pdu_pool_size": 36864, 00:06:18.973 "immediate_data_pool_size": 16384, 00:06:18.973 "data_out_pool_size": 2048 00:06:18.973 } 00:06:18.973 } 00:06:18.973 ] 00:06:18.973 } 00:06:18.973 ] 00:06:18.973 } 00:06:18.973 09:34:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:06:18.973 09:34:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57051 00:06:18.973 09:34:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57051 ']' 00:06:18.973 09:34:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57051 00:06:18.973 09:34:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:06:18.973 09:34:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:18.973 09:34:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57051 00:06:18.973 09:34:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:18.973 killing process with pid 57051 00:06:18.973 09:34:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:18.973 09:34:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57051' 00:06:18.973 09:34:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57051 00:06:18.973 09:34:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57051 00:06:19.541 09:34:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57079 00:06:19.541 09:34:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:19.541 09:34:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:06:24.812 09:34:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57079 00:06:24.812 09:34:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57079 ']' 00:06:24.812 09:34:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57079 00:06:24.812 09:34:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:06:24.812 09:34:11 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:24.812 09:34:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57079 00:06:24.812 09:34:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:24.812 09:34:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:24.812 09:34:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57079' 00:06:24.812 killing process with pid 57079 00:06:24.812 09:34:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57079 00:06:24.812 09:34:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57079 00:06:24.812 09:34:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:24.812 09:34:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:24.812 00:06:24.812 real 0m7.097s 00:06:24.812 user 0m6.786s 00:06:24.812 sys 0m0.687s 00:06:24.812 09:34:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:24.812 09:34:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:24.812 ************************************ 00:06:24.812 END TEST skip_rpc_with_json 00:06:24.812 ************************************ 00:06:24.812 09:34:12 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:06:24.812 09:34:12 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:24.812 09:34:12 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:24.812 09:34:12 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:24.812 ************************************ 00:06:24.812 START TEST skip_rpc_with_delay 00:06:24.812 ************************************ 00:06:24.812 09:34:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:06:24.812 09:34:12 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:24.812 09:34:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:06:24.812 09:34:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:24.812 09:34:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:24.812 09:34:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:24.812 09:34:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:24.812 09:34:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:24.812 09:34:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:24.812 09:34:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:24.812 09:34:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:24.812 09:34:12 
skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:24.812 09:34:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:25.072 [2024-11-19 09:34:12.474733] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:06:25.072 09:34:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:06:25.072 09:34:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:25.072 09:34:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:25.072 09:34:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:25.072 00:06:25.072 real 0m0.117s 00:06:25.072 user 0m0.077s 00:06:25.072 sys 0m0.037s 00:06:25.072 ************************************ 00:06:25.072 END TEST skip_rpc_with_delay 00:06:25.072 ************************************ 00:06:25.072 09:34:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:25.072 09:34:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:06:25.072 09:34:12 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:06:25.072 09:34:12 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:25.072 09:34:12 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:25.072 09:34:12 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:25.072 09:34:12 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:25.072 09:34:12 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:25.072 ************************************ 00:06:25.072 START TEST exit_on_failed_rpc_init 00:06:25.072 ************************************ 00:06:25.072 09:34:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:06:25.072 09:34:12 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57194 00:06:25.072 09:34:12 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:25.072 09:34:12 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57194 00:06:25.072 09:34:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 57194 ']' 00:06:25.072 09:34:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:25.072 09:34:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:25.072 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:25.072 09:34:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:25.072 09:34:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:25.072 09:34:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:25.072 [2024-11-19 09:34:12.617621] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 
00:06:25.072 [2024-11-19 09:34:12.617745] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57194 ] 00:06:25.330 [2024-11-19 09:34:12.763437] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.330 [2024-11-19 09:34:12.826226] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.330 [2024-11-19 09:34:12.901344] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:26.266 09:34:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:26.266 09:34:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:06:26.266 09:34:13 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:26.266 09:34:13 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:26.266 09:34:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:06:26.266 09:34:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:26.266 09:34:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:26.266 09:34:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:26.266 09:34:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:26.266 09:34:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:26.266 09:34:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:26.266 09:34:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:26.266 09:34:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:26.266 09:34:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:26.266 09:34:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:26.266 [2024-11-19 09:34:13.790413] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:06:26.266 [2024-11-19 09:34:13.790556] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57212 ] 00:06:26.524 [2024-11-19 09:34:13.940821] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.524 [2024-11-19 09:34:14.018113] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:26.524 [2024-11-19 09:34:14.018462] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
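The error above is exactly what the exit_on_failed_rpc_init test is probing for: the second spdk_tgt instance (pid 57212, core mask 0x2) tries to bind the default RPC socket /var/tmp/spdk.sock that pid 57194 still owns, so rpc.c refuses to listen and the app exits non-zero (es=234, mapped down to 1 by the helper). A minimal sketch of how two targets can coexist, assuming each is pointed at its own RPC socket with the same -r flag the json_config tests use later; rpc_get_methods is only an illustrative probe, not something this test runs:

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &                             # first instance, default /var/tmp/spdk.sock
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 -r /var/tmp/spdk2.sock &      # second instance, private socket
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk2.sock rpc_get_methods   # address each instance through its own socket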
00:06:26.524 [2024-11-19 09:34:14.018495] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:26.524 [2024-11-19 09:34:14.018506] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:26.524 09:34:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:06:26.524 09:34:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:26.524 09:34:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:06:26.524 09:34:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:06:26.524 09:34:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:06:26.524 09:34:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:26.524 09:34:14 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:26.524 09:34:14 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57194 00:06:26.524 09:34:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 57194 ']' 00:06:26.524 09:34:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 57194 00:06:26.524 09:34:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:06:26.524 09:34:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:26.524 09:34:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57194 00:06:26.524 killing process with pid 57194 00:06:26.524 09:34:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:26.524 09:34:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:26.524 09:34:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57194' 00:06:26.524 09:34:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 57194 00:06:26.524 09:34:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 57194 00:06:27.091 00:06:27.091 real 0m1.954s 00:06:27.091 user 0m2.344s 00:06:27.091 sys 0m0.425s 00:06:27.091 09:34:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:27.091 ************************************ 00:06:27.091 END TEST exit_on_failed_rpc_init 00:06:27.091 ************************************ 00:06:27.091 09:34:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:27.091 09:34:14 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:27.091 ************************************ 00:06:27.091 END TEST skip_rpc 00:06:27.091 ************************************ 00:06:27.091 00:06:27.091 real 0m14.957s 00:06:27.091 user 0m14.391s 00:06:27.091 sys 0m1.630s 00:06:27.091 09:34:14 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:27.091 09:34:14 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:27.091 09:34:14 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:27.091 09:34:14 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:27.091 09:34:14 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:27.091 09:34:14 -- common/autotest_common.sh@10 -- # set +x 00:06:27.091 
************************************ 00:06:27.091 START TEST rpc_client 00:06:27.091 ************************************ 00:06:27.091 09:34:14 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:27.091 * Looking for test storage... 00:06:27.091 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:06:27.091 09:34:14 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:27.091 09:34:14 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:06:27.091 09:34:14 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:27.350 09:34:14 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:27.350 09:34:14 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:27.350 09:34:14 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:27.350 09:34:14 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:27.350 09:34:14 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:06:27.350 09:34:14 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:06:27.350 09:34:14 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:06:27.350 09:34:14 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:06:27.350 09:34:14 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:06:27.350 09:34:14 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:06:27.350 09:34:14 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:06:27.350 09:34:14 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:27.350 09:34:14 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:06:27.350 09:34:14 rpc_client -- scripts/common.sh@345 -- # : 1 00:06:27.350 09:34:14 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:27.350 09:34:14 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:27.350 09:34:14 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:06:27.350 09:34:14 rpc_client -- scripts/common.sh@353 -- # local d=1 00:06:27.350 09:34:14 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:27.350 09:34:14 rpc_client -- scripts/common.sh@355 -- # echo 1 00:06:27.350 09:34:14 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:06:27.350 09:34:14 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:06:27.350 09:34:14 rpc_client -- scripts/common.sh@353 -- # local d=2 00:06:27.350 09:34:14 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:27.350 09:34:14 rpc_client -- scripts/common.sh@355 -- # echo 2 00:06:27.350 09:34:14 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:06:27.350 09:34:14 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:27.350 09:34:14 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:27.350 09:34:14 rpc_client -- scripts/common.sh@368 -- # return 0 00:06:27.350 09:34:14 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:27.350 09:34:14 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:27.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.350 --rc genhtml_branch_coverage=1 00:06:27.350 --rc genhtml_function_coverage=1 00:06:27.350 --rc genhtml_legend=1 00:06:27.350 --rc geninfo_all_blocks=1 00:06:27.350 --rc geninfo_unexecuted_blocks=1 00:06:27.350 00:06:27.350 ' 00:06:27.350 09:34:14 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:27.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.350 --rc genhtml_branch_coverage=1 00:06:27.350 --rc genhtml_function_coverage=1 00:06:27.350 --rc genhtml_legend=1 00:06:27.350 --rc geninfo_all_blocks=1 00:06:27.350 --rc geninfo_unexecuted_blocks=1 00:06:27.350 00:06:27.350 ' 00:06:27.350 09:34:14 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:27.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.350 --rc genhtml_branch_coverage=1 00:06:27.350 --rc genhtml_function_coverage=1 00:06:27.350 --rc genhtml_legend=1 00:06:27.350 --rc geninfo_all_blocks=1 00:06:27.350 --rc geninfo_unexecuted_blocks=1 00:06:27.350 00:06:27.350 ' 00:06:27.350 09:34:14 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:27.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.350 --rc genhtml_branch_coverage=1 00:06:27.350 --rc genhtml_function_coverage=1 00:06:27.350 --rc genhtml_legend=1 00:06:27.350 --rc geninfo_all_blocks=1 00:06:27.350 --rc geninfo_unexecuted_blocks=1 00:06:27.350 00:06:27.350 ' 00:06:27.350 09:34:14 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:06:27.350 OK 00:06:27.350 09:34:14 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:27.350 00:06:27.350 real 0m0.183s 00:06:27.350 user 0m0.112s 00:06:27.350 sys 0m0.082s 00:06:27.350 09:34:14 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:27.350 09:34:14 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:27.350 ************************************ 00:06:27.350 END TEST rpc_client 00:06:27.350 ************************************ 00:06:27.350 09:34:14 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:27.350 09:34:14 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:27.350 09:34:14 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:27.350 09:34:14 -- common/autotest_common.sh@10 -- # set +x 00:06:27.350 ************************************ 00:06:27.350 START TEST json_config 00:06:27.350 ************************************ 00:06:27.350 09:34:14 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:27.350 09:34:14 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:27.350 09:34:14 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:06:27.350 09:34:14 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:27.610 09:34:14 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:27.610 09:34:14 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:27.610 09:34:14 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:27.610 09:34:14 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:27.610 09:34:14 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:06:27.610 09:34:14 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:06:27.610 09:34:14 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:06:27.610 09:34:14 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:06:27.610 09:34:14 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:06:27.610 09:34:14 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:06:27.610 09:34:14 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:06:27.610 09:34:14 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:27.610 09:34:14 json_config -- scripts/common.sh@344 -- # case "$op" in 00:06:27.610 09:34:14 json_config -- scripts/common.sh@345 -- # : 1 00:06:27.610 09:34:14 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:27.610 09:34:14 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:27.610 09:34:14 json_config -- scripts/common.sh@365 -- # decimal 1 00:06:27.610 09:34:14 json_config -- scripts/common.sh@353 -- # local d=1 00:06:27.610 09:34:14 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:27.610 09:34:14 json_config -- scripts/common.sh@355 -- # echo 1 00:06:27.610 09:34:14 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:06:27.610 09:34:14 json_config -- scripts/common.sh@366 -- # decimal 2 00:06:27.610 09:34:14 json_config -- scripts/common.sh@353 -- # local d=2 00:06:27.610 09:34:14 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:27.610 09:34:14 json_config -- scripts/common.sh@355 -- # echo 2 00:06:27.610 09:34:14 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:06:27.610 09:34:14 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:27.610 09:34:15 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:27.610 09:34:15 json_config -- scripts/common.sh@368 -- # return 0 00:06:27.610 09:34:15 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:27.610 09:34:15 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:27.610 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.610 --rc genhtml_branch_coverage=1 00:06:27.610 --rc genhtml_function_coverage=1 00:06:27.610 --rc genhtml_legend=1 00:06:27.610 --rc geninfo_all_blocks=1 00:06:27.610 --rc geninfo_unexecuted_blocks=1 00:06:27.610 00:06:27.610 ' 00:06:27.610 09:34:15 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:27.610 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.610 --rc genhtml_branch_coverage=1 00:06:27.610 --rc genhtml_function_coverage=1 00:06:27.610 --rc genhtml_legend=1 00:06:27.610 --rc geninfo_all_blocks=1 00:06:27.610 --rc geninfo_unexecuted_blocks=1 00:06:27.610 00:06:27.610 ' 00:06:27.610 09:34:15 json_config -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:27.610 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.610 --rc genhtml_branch_coverage=1 00:06:27.610 --rc genhtml_function_coverage=1 00:06:27.610 --rc genhtml_legend=1 00:06:27.610 --rc geninfo_all_blocks=1 00:06:27.610 --rc geninfo_unexecuted_blocks=1 00:06:27.610 00:06:27.610 ' 00:06:27.610 09:34:15 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:27.610 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.610 --rc genhtml_branch_coverage=1 00:06:27.610 --rc genhtml_function_coverage=1 00:06:27.610 --rc genhtml_legend=1 00:06:27.610 --rc geninfo_all_blocks=1 00:06:27.610 --rc geninfo_unexecuted_blocks=1 00:06:27.610 00:06:27.610 ' 00:06:27.610 09:34:15 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:27.610 09:34:15 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:27.610 09:34:15 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:27.610 09:34:15 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:27.610 09:34:15 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:27.610 09:34:15 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:27.610 09:34:15 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:27.610 09:34:15 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:27.610 09:34:15 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:27.610 09:34:15 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:27.610 09:34:15 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:27.610 09:34:15 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:27.610 09:34:15 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c 00:06:27.610 09:34:15 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=9203ba0c-8506-4f0b-a886-a7f874c4694c 00:06:27.610 09:34:15 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:27.610 09:34:15 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:27.610 09:34:15 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:27.610 09:34:15 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:27.610 09:34:15 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:27.610 09:34:15 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:06:27.610 09:34:15 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:27.610 09:34:15 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:27.610 09:34:15 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:27.610 09:34:15 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:27.610 09:34:15 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:27.610 09:34:15 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:27.610 09:34:15 json_config -- paths/export.sh@5 -- # export PATH 00:06:27.610 09:34:15 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:27.610 09:34:15 json_config -- nvmf/common.sh@51 -- # : 0 00:06:27.610 09:34:15 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:27.610 09:34:15 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:27.610 09:34:15 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:27.610 09:34:15 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:27.610 09:34:15 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:27.610 09:34:15 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:27.610 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:27.610 09:34:15 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:27.610 09:34:15 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:27.610 09:34:15 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:27.610 09:34:15 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:27.610 09:34:15 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:27.610 09:34:15 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:27.610 09:34:15 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:27.610 09:34:15 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:27.611 09:34:15 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:06:27.611 09:34:15 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:06:27.611 09:34:15 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:06:27.611 09:34:15 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:06:27.611 09:34:15 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:06:27.611 09:34:15 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:06:27.611 09:34:15 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:06:27.611 09:34:15 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:06:27.611 09:34:15 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:06:27.611 09:34:15 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:27.611 09:34:15 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:06:27.611 INFO: JSON configuration test init 00:06:27.611 09:34:15 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:06:27.611 09:34:15 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:06:27.611 09:34:15 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:27.611 09:34:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:27.611 09:34:15 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:06:27.611 09:34:15 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:27.611 09:34:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:27.611 Waiting for target to run... 00:06:27.611 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
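The 'Waiting for target to run...' messages around this point come from waitforlisten in autotest_common.sh, which does little more than poll the target's RPC Unix socket until it answers. A rough standalone equivalent, with rpc_get_methods assumed as a cheap readiness probe (any RPC that succeeds once the server is up would do):

  sock=/var/tmp/spdk_tgt.sock
  for i in $(seq 1 100); do
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" rpc_get_methods >/dev/null 2>&1 && break
      sleep 0.1
  done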
00:06:27.611 09:34:15 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:06:27.611 09:34:15 json_config -- json_config/common.sh@9 -- # local app=target 00:06:27.611 09:34:15 json_config -- json_config/common.sh@10 -- # shift 00:06:27.611 09:34:15 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:27.611 09:34:15 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:27.611 09:34:15 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:27.611 09:34:15 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:27.611 09:34:15 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:27.611 09:34:15 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=57346 00:06:27.611 09:34:15 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:06:27.611 09:34:15 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:27.611 09:34:15 json_config -- json_config/common.sh@25 -- # waitforlisten 57346 /var/tmp/spdk_tgt.sock 00:06:27.611 09:34:15 json_config -- common/autotest_common.sh@835 -- # '[' -z 57346 ']' 00:06:27.611 09:34:15 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:27.611 09:34:15 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:27.611 09:34:15 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:27.611 09:34:15 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:27.611 09:34:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:27.611 [2024-11-19 09:34:15.124631] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 
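Because the target above is launched with --wait-for-rpc, it parks before subsystem initialization and serves only the RPC socket; the harness then streams a full configuration into it (the gen_nvme.sh --json-with-subsystems and load_config pair traced further on) before the framework comes up. In outline, assuming the generated JSON is piped straight into rpc.py on stdin:

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc &
  /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems \
      | /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config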
00:06:27.611 [2024-11-19 09:34:15.124966] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57346 ] 00:06:28.230 [2024-11-19 09:34:15.545938] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.230 [2024-11-19 09:34:15.610832] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.488 09:34:16 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:28.488 09:34:16 json_config -- common/autotest_common.sh@868 -- # return 0 00:06:28.488 09:34:16 json_config -- json_config/common.sh@26 -- # echo '' 00:06:28.488 00:06:28.488 09:34:16 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:06:28.488 09:34:16 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:06:28.488 09:34:16 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:28.488 09:34:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:28.488 09:34:16 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:06:28.488 09:34:16 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:06:28.488 09:34:16 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:28.488 09:34:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:28.746 09:34:16 json_config -- json_config/json_config.sh@280 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:06:28.746 09:34:16 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:06:28.746 09:34:16 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:06:29.004 [2024-11-19 09:34:16.451932] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:29.261 09:34:16 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:06:29.261 09:34:16 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:06:29.261 09:34:16 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:29.262 09:34:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:29.262 09:34:16 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:06:29.262 09:34:16 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:06:29.262 09:34:16 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:06:29.262 09:34:16 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:06:29.262 09:34:16 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:06:29.262 09:34:16 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:06:29.262 09:34:16 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:06:29.262 09:34:16 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:06:29.520 09:34:16 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:06:29.520 09:34:16 json_config -- json_config/json_config.sh@51 -- # local get_types 00:06:29.520 09:34:16 json_config -- json_config/json_config.sh@53 
-- # local type_diff 00:06:29.520 09:34:16 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:06:29.520 09:34:16 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:06:29.520 09:34:16 json_config -- json_config/json_config.sh@54 -- # sort 00:06:29.520 09:34:16 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:06:29.520 09:34:16 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:06:29.520 09:34:16 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:06:29.520 09:34:16 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:06:29.520 09:34:16 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:29.520 09:34:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:29.520 09:34:17 json_config -- json_config/json_config.sh@62 -- # return 0 00:06:29.520 09:34:17 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:06:29.520 09:34:17 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:06:29.520 09:34:17 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:06:29.520 09:34:17 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:06:29.520 09:34:17 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:06:29.520 09:34:17 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:06:29.520 09:34:17 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:29.520 09:34:17 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:29.520 09:34:17 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:06:29.520 09:34:17 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:06:29.520 09:34:17 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:06:29.520 09:34:17 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:29.520 09:34:17 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:29.779 MallocForNvmf0 00:06:29.779 09:34:17 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:29.779 09:34:17 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:30.038 MallocForNvmf1 00:06:30.038 09:34:17 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:06:30.038 09:34:17 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:06:30.604 [2024-11-19 09:34:17.941099] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:30.604 09:34:17 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:30.604 09:34:17 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:30.863 09:34:18 json_config -- json_config/json_config.sh@254 -- # tgt_rpc 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:30.863 09:34:18 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:31.121 09:34:18 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:31.121 09:34:18 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:31.380 09:34:18 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:31.380 09:34:18 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:31.639 [2024-11-19 09:34:19.117724] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:31.639 09:34:19 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:06:31.639 09:34:19 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:31.639 09:34:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:31.639 09:34:19 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:06:31.639 09:34:19 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:31.639 09:34:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:31.639 09:34:19 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:06:31.639 09:34:19 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:31.639 09:34:19 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:31.898 MallocBdevForConfigChangeCheck 00:06:31.898 09:34:19 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:06:31.898 09:34:19 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:31.898 09:34:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:32.157 09:34:19 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:06:32.157 09:34:19 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:32.415 INFO: shutting down applications... 00:06:32.415 09:34:19 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 
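Before the shutdown that begins here, the create_nvmf_subsystem_config step above assembled the target's NVMe-oF state with a handful of RPCs against /var/tmp/spdk_tgt.sock. Stripped of the xtrace noise, the sequence is roughly:

  RPC='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock'
  $RPC bdev_malloc_create 8 512 --name MallocForNvmf0
  $RPC bdev_malloc_create 4 1024 --name MallocForNvmf1
  $RPC nvmf_create_transport -t tcp -u 8192 -c 0
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420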
00:06:32.415 09:34:19 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:06:32.415 09:34:19 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:06:32.415 09:34:19 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:06:32.415 09:34:19 json_config -- json_config/json_config.sh@340 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:06:32.983 Calling clear_iscsi_subsystem 00:06:32.983 Calling clear_nvmf_subsystem 00:06:32.983 Calling clear_nbd_subsystem 00:06:32.983 Calling clear_ublk_subsystem 00:06:32.983 Calling clear_vhost_blk_subsystem 00:06:32.983 Calling clear_vhost_scsi_subsystem 00:06:32.983 Calling clear_bdev_subsystem 00:06:32.983 09:34:20 json_config -- json_config/json_config.sh@344 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:06:32.983 09:34:20 json_config -- json_config/json_config.sh@350 -- # count=100 00:06:32.983 09:34:20 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:06:32.983 09:34:20 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:32.983 09:34:20 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:06:32.983 09:34:20 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:06:33.242 09:34:20 json_config -- json_config/json_config.sh@352 -- # break 00:06:33.242 09:34:20 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:06:33.242 09:34:20 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:06:33.242 09:34:20 json_config -- json_config/common.sh@31 -- # local app=target 00:06:33.242 09:34:20 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:33.242 09:34:20 json_config -- json_config/common.sh@35 -- # [[ -n 57346 ]] 00:06:33.242 09:34:20 json_config -- json_config/common.sh@38 -- # kill -SIGINT 57346 00:06:33.242 09:34:20 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:33.242 09:34:20 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:33.242 09:34:20 json_config -- json_config/common.sh@41 -- # kill -0 57346 00:06:33.242 09:34:20 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:06:33.810 09:34:21 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:06:33.810 09:34:21 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:33.810 09:34:21 json_config -- json_config/common.sh@41 -- # kill -0 57346 00:06:33.810 09:34:21 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:33.810 09:34:21 json_config -- json_config/common.sh@43 -- # break 00:06:33.810 SPDK target shutdown done 00:06:33.810 INFO: relaunching applications... 00:06:33.810 09:34:21 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:33.810 09:34:21 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:33.810 09:34:21 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 
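The teardown traced above is a plain signal-and-poll loop: clear the live config over RPC, send SIGINT to the target, then keep probing the pid with kill -0 in half-second steps (up to 30 tries) until the process is gone. As a standalone sketch with the pid from this run:

  pid=57346
  kill -SIGINT "$pid"
  for (( i = 0; i < 30; i++ )); do
      kill -0 "$pid" 2>/dev/null || { echo 'SPDK target shutdown done'; break; }
      sleep 0.5
  done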
00:06:33.810 09:34:21 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:33.810 09:34:21 json_config -- json_config/common.sh@9 -- # local app=target 00:06:33.810 09:34:21 json_config -- json_config/common.sh@10 -- # shift 00:06:33.810 09:34:21 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:33.810 09:34:21 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:33.810 09:34:21 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:33.810 09:34:21 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:33.810 09:34:21 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:33.810 Waiting for target to run... 00:06:33.810 09:34:21 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=57549 00:06:33.810 09:34:21 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:33.810 09:34:21 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:33.810 09:34:21 json_config -- json_config/common.sh@25 -- # waitforlisten 57549 /var/tmp/spdk_tgt.sock 00:06:33.810 09:34:21 json_config -- common/autotest_common.sh@835 -- # '[' -z 57549 ']' 00:06:33.810 09:34:21 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:33.810 09:34:21 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:33.810 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:33.810 09:34:21 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:33.810 09:34:21 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:33.810 09:34:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:33.810 [2024-11-19 09:34:21.296624] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:06:33.811 [2024-11-19 09:34:21.296718] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57549 ] 00:06:34.105 [2024-11-19 09:34:21.712519] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.364 [2024-11-19 09:34:21.761057] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.364 [2024-11-19 09:34:21.899253] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:34.623 [2024-11-19 09:34:22.118593] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:34.623 [2024-11-19 09:34:22.150828] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:34.882 00:06:34.882 INFO: Checking if target configuration is the same... 
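The 'Checking if target configuration is the same' pass that follows is a textual comparison: the relaunched target's live configuration (rpc.py save_config) and the spdk_tgt_config.json it was booted from are both normalized with config_filter.py -method sort and then fed to diff -u. In outline, with illustrative temp-file names and assuming config_filter.py reads the config on stdin as json_diff.sh drives it:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  FILTER=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py
  $RPC -s /var/tmp/spdk_tgt.sock save_config | $FILTER -method sort > /tmp/live_sorted.json
  $FILTER -method sort < /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json > /tmp/file_sorted.json
  diff -u /tmp/file_sorted.json /tmp/live_sorted.json && echo 'INFO: JSON config files are the same'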
00:06:34.882 09:34:22 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:34.882 09:34:22 json_config -- common/autotest_common.sh@868 -- # return 0 00:06:34.882 09:34:22 json_config -- json_config/common.sh@26 -- # echo '' 00:06:34.882 09:34:22 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:06:34.882 09:34:22 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:06:34.882 09:34:22 json_config -- json_config/json_config.sh@385 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:34.882 09:34:22 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:06:34.882 09:34:22 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:34.882 + '[' 2 -ne 2 ']' 00:06:34.882 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:06:34.882 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:06:34.882 + rootdir=/home/vagrant/spdk_repo/spdk 00:06:34.882 +++ basename /dev/fd/62 00:06:34.882 ++ mktemp /tmp/62.XXX 00:06:34.882 + tmp_file_1=/tmp/62.BcI 00:06:34.882 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:34.882 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:34.882 + tmp_file_2=/tmp/spdk_tgt_config.json.6VO 00:06:34.882 + ret=0 00:06:34.882 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:06:35.140 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:06:35.140 + diff -u /tmp/62.BcI /tmp/spdk_tgt_config.json.6VO 00:06:35.140 INFO: JSON config files are the same 00:06:35.140 + echo 'INFO: JSON config files are the same' 00:06:35.140 + rm /tmp/62.BcI /tmp/spdk_tgt_config.json.6VO 00:06:35.140 + exit 0 00:06:35.140 INFO: changing configuration and checking if this can be detected... 00:06:35.140 09:34:22 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:06:35.140 09:34:22 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:06:35.140 09:34:22 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:35.140 09:34:22 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:35.707 09:34:23 json_config -- json_config/json_config.sh@394 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:35.707 09:34:23 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:06:35.707 09:34:23 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:35.707 + '[' 2 -ne 2 ']' 00:06:35.707 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:06:35.707 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:06:35.707 + rootdir=/home/vagrant/spdk_repo/spdk 00:06:35.707 +++ basename /dev/fd/62 00:06:35.707 ++ mktemp /tmp/62.XXX 00:06:35.707 + tmp_file_1=/tmp/62.3Xz 00:06:35.707 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:35.707 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:35.707 + tmp_file_2=/tmp/spdk_tgt_config.json.ThG 00:06:35.707 + ret=0 00:06:35.707 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:06:35.967 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:06:35.967 + diff -u /tmp/62.3Xz /tmp/spdk_tgt_config.json.ThG 00:06:35.967 + ret=1 00:06:35.967 + echo '=== Start of file: /tmp/62.3Xz ===' 00:06:35.967 + cat /tmp/62.3Xz 00:06:35.967 + echo '=== End of file: /tmp/62.3Xz ===' 00:06:35.967 + echo '' 00:06:35.967 + echo '=== Start of file: /tmp/spdk_tgt_config.json.ThG ===' 00:06:35.967 + cat /tmp/spdk_tgt_config.json.ThG 00:06:35.967 + echo '=== End of file: /tmp/spdk_tgt_config.json.ThG ===' 00:06:35.967 + echo '' 00:06:35.967 + rm /tmp/62.3Xz /tmp/spdk_tgt_config.json.ThG 00:06:35.967 + exit 1 00:06:35.967 INFO: configuration change detected. 00:06:35.967 09:34:23 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:06:35.967 09:34:23 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:06:35.967 09:34:23 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:06:35.967 09:34:23 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:35.967 09:34:23 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:35.967 09:34:23 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:06:35.967 09:34:23 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:06:35.967 09:34:23 json_config -- json_config/json_config.sh@324 -- # [[ -n 57549 ]] 00:06:35.967 09:34:23 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:06:35.967 09:34:23 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:06:35.967 09:34:23 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:35.967 09:34:23 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:35.967 09:34:23 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:06:35.967 09:34:23 json_config -- json_config/json_config.sh@200 -- # uname -s 00:06:35.967 09:34:23 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:06:35.967 09:34:23 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:06:35.967 09:34:23 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:06:35.967 09:34:23 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:06:35.967 09:34:23 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:35.967 09:34:23 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:35.967 09:34:23 json_config -- json_config/json_config.sh@330 -- # killprocess 57549 00:06:35.967 09:34:23 json_config -- common/autotest_common.sh@954 -- # '[' -z 57549 ']' 00:06:35.967 09:34:23 json_config -- common/autotest_common.sh@958 -- # kill -0 57549 00:06:35.967 09:34:23 json_config -- common/autotest_common.sh@959 -- # uname 00:06:35.967 09:34:23 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:35.967 09:34:23 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57549 00:06:36.226 
killing process with pid 57549 00:06:36.226 09:34:23 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:36.226 09:34:23 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:36.226 09:34:23 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57549' 00:06:36.226 09:34:23 json_config -- common/autotest_common.sh@973 -- # kill 57549 00:06:36.226 09:34:23 json_config -- common/autotest_common.sh@978 -- # wait 57549 00:06:36.226 09:34:23 json_config -- json_config/json_config.sh@333 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:36.226 09:34:23 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:06:36.226 09:34:23 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:36.226 09:34:23 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:36.486 INFO: Success 00:06:36.486 09:34:23 json_config -- json_config/json_config.sh@335 -- # return 0 00:06:36.486 09:34:23 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:06:36.486 00:06:36.486 real 0m9.061s 00:06:36.486 user 0m13.194s 00:06:36.486 sys 0m1.735s 00:06:36.486 09:34:23 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:36.486 ************************************ 00:06:36.486 END TEST json_config 00:06:36.486 ************************************ 00:06:36.486 09:34:23 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:36.486 09:34:23 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:36.486 09:34:23 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:36.486 09:34:23 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:36.486 09:34:23 -- common/autotest_common.sh@10 -- # set +x 00:06:36.486 ************************************ 00:06:36.486 START TEST json_config_extra_key 00:06:36.486 ************************************ 00:06:36.486 09:34:23 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:36.486 09:34:23 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:36.486 09:34:23 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:06:36.486 09:34:23 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:36.486 09:34:24 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:36.486 09:34:24 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:36.486 09:34:24 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:36.486 09:34:24 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:36.486 09:34:24 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:06:36.486 09:34:24 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:06:36.486 09:34:24 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:06:36.486 09:34:24 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:06:36.486 09:34:24 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:06:36.486 09:34:24 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:06:36.486 09:34:24 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:06:36.486 09:34:24 json_config_extra_key -- 
scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:36.486 09:34:24 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:06:36.486 09:34:24 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:06:36.486 09:34:24 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:36.486 09:34:24 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:36.486 09:34:24 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:06:36.486 09:34:24 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:06:36.486 09:34:24 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:36.486 09:34:24 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:06:36.486 09:34:24 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:06:36.486 09:34:24 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:06:36.486 09:34:24 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:06:36.486 09:34:24 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:36.486 09:34:24 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:06:36.486 09:34:24 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:06:36.486 09:34:24 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:36.486 09:34:24 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:36.486 09:34:24 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:06:36.486 09:34:24 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:36.486 09:34:24 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:36.486 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:36.486 --rc genhtml_branch_coverage=1 00:06:36.486 --rc genhtml_function_coverage=1 00:06:36.486 --rc genhtml_legend=1 00:06:36.486 --rc geninfo_all_blocks=1 00:06:36.486 --rc geninfo_unexecuted_blocks=1 00:06:36.486 00:06:36.486 ' 00:06:36.486 09:34:24 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:36.486 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:36.486 --rc genhtml_branch_coverage=1 00:06:36.486 --rc genhtml_function_coverage=1 00:06:36.486 --rc genhtml_legend=1 00:06:36.486 --rc geninfo_all_blocks=1 00:06:36.486 --rc geninfo_unexecuted_blocks=1 00:06:36.486 00:06:36.486 ' 00:06:36.486 09:34:24 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:36.486 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:36.486 --rc genhtml_branch_coverage=1 00:06:36.486 --rc genhtml_function_coverage=1 00:06:36.486 --rc genhtml_legend=1 00:06:36.486 --rc geninfo_all_blocks=1 00:06:36.486 --rc geninfo_unexecuted_blocks=1 00:06:36.486 00:06:36.486 ' 00:06:36.486 09:34:24 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:36.486 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:36.486 --rc genhtml_branch_coverage=1 00:06:36.486 --rc genhtml_function_coverage=1 00:06:36.486 --rc genhtml_legend=1 00:06:36.486 --rc geninfo_all_blocks=1 00:06:36.486 --rc geninfo_unexecuted_blocks=1 00:06:36.486 00:06:36.486 ' 00:06:36.486 09:34:24 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:36.486 09:34:24 json_config_extra_key -- nvmf/common.sh@7 -- # 
uname -s 00:06:36.486 09:34:24 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:36.486 09:34:24 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:36.486 09:34:24 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:36.486 09:34:24 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:36.486 09:34:24 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:36.486 09:34:24 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:36.486 09:34:24 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:36.486 09:34:24 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:36.486 09:34:24 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:36.486 09:34:24 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:36.486 09:34:24 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c 00:06:36.486 09:34:24 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=9203ba0c-8506-4f0b-a886-a7f874c4694c 00:06:36.486 09:34:24 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:36.486 09:34:24 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:36.486 09:34:24 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:36.486 09:34:24 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:36.486 09:34:24 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:36.486 09:34:24 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:06:36.486 09:34:24 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:36.486 09:34:24 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:36.486 09:34:24 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:36.486 09:34:24 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:36.486 09:34:24 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:36.486 09:34:24 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:36.486 09:34:24 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:36.486 09:34:24 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:36.486 09:34:24 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:06:36.486 09:34:24 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:36.486 09:34:24 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:36.486 09:34:24 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:36.486 09:34:24 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:36.486 09:34:24 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:36.486 09:34:24 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:36.486 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:36.486 09:34:24 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:36.486 09:34:24 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:36.487 09:34:24 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:36.487 INFO: launching applications... 00:06:36.487 Waiting for target to run... 00:06:36.487 09:34:24 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:36.487 09:34:24 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:36.487 09:34:24 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:36.487 09:34:24 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:36.487 09:34:24 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:36.487 09:34:24 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:36.487 09:34:24 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:36.487 09:34:24 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:06:36.487 09:34:24 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:36.487 09:34:24 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:36.487 09:34:24 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
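Aside: json_config_extra_key.sh keeps its per-application bookkeeping in associative arrays (PID, RPC socket, extra parameters, config path), as the declare -A lines above show, and then launches the target through the shared json_config_test_start_app helper. A condensed sketch of that setup, using the values visible in the trace; the backgrounding and PID capture are a simplification of what the real helper does before polling the RPC socket (waitforlisten):

    declare -A app_pid=(['target']='')
    declare -A app_socket=(['target']='/var/tmp/spdk_tgt.sock')
    declare -A app_params=(['target']='-m 0x1 -s 1024')
    declare -A configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json')

    # start the target with the extra-key JSON config and remember its PID
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ${app_params['target']} \
        -r "${app_socket['target']}" --json "${configs_path['target']}" &
    app_pid['target']=$!
    # the real helper then waits for the UNIX domain socket to accept RPCs before continuing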
00:06:36.487 09:34:24 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:36.487 09:34:24 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:36.487 09:34:24 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:36.487 09:34:24 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:36.487 09:34:24 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:36.487 09:34:24 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:36.487 09:34:24 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:36.487 09:34:24 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:36.487 09:34:24 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57703 00:06:36.487 09:34:24 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:36.487 09:34:24 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57703 /var/tmp/spdk_tgt.sock 00:06:36.487 09:34:24 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:36.487 09:34:24 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 57703 ']' 00:06:36.487 09:34:24 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:36.487 09:34:24 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:36.487 09:34:24 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:36.487 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:36.487 09:34:24 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:36.487 09:34:24 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:36.745 [2024-11-19 09:34:24.188293] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:06:36.745 [2024-11-19 09:34:24.188659] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57703 ] 00:06:37.004 [2024-11-19 09:34:24.619264] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.263 [2024-11-19 09:34:24.678511] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.263 [2024-11-19 09:34:24.715712] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:37.829 09:34:25 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:37.829 09:34:25 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:06:37.829 09:34:25 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:37.829 00:06:37.829 09:34:25 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:06:37.829 INFO: shutting down applications... 
00:06:37.829 09:34:25 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:37.829 09:34:25 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:37.829 09:34:25 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:37.829 09:34:25 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57703 ]] 00:06:37.829 09:34:25 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57703 00:06:37.829 09:34:25 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:37.829 09:34:25 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:37.829 09:34:25 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57703 00:06:37.829 09:34:25 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:38.397 09:34:25 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:38.397 09:34:25 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:38.397 09:34:25 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57703 00:06:38.397 09:34:25 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:38.397 09:34:25 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:38.397 SPDK target shutdown done 00:06:38.397 Success 00:06:38.397 09:34:25 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:38.397 09:34:25 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:38.397 09:34:25 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:38.397 00:06:38.397 real 0m1.849s 00:06:38.397 user 0m1.814s 00:06:38.397 sys 0m0.475s 00:06:38.398 ************************************ 00:06:38.398 END TEST json_config_extra_key 00:06:38.398 ************************************ 00:06:38.398 09:34:25 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:38.398 09:34:25 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:38.398 09:34:25 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:38.398 09:34:25 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:38.398 09:34:25 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:38.398 09:34:25 -- common/autotest_common.sh@10 -- # set +x 00:06:38.398 ************************************ 00:06:38.398 START TEST alias_rpc 00:06:38.398 ************************************ 00:06:38.398 09:34:25 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:38.398 * Looking for test storage... 
00:06:38.398 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:06:38.398 09:34:25 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:38.398 09:34:25 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:06:38.398 09:34:25 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:38.398 09:34:25 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:38.398 09:34:25 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:38.398 09:34:25 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:38.398 09:34:25 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:38.398 09:34:25 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:38.398 09:34:25 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:38.398 09:34:25 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:38.398 09:34:25 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:38.398 09:34:25 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:38.398 09:34:25 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:38.398 09:34:25 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:38.398 09:34:25 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:38.398 09:34:25 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:38.398 09:34:25 alias_rpc -- scripts/common.sh@345 -- # : 1 00:06:38.398 09:34:25 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:38.398 09:34:25 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:38.398 09:34:25 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:38.398 09:34:25 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:06:38.398 09:34:25 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:38.398 09:34:25 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:06:38.398 09:34:25 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:38.398 09:34:25 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:38.398 09:34:25 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:06:38.398 09:34:25 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:38.398 09:34:25 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:06:38.398 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
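Aside: the repeated scripts/common.sh trace above (lt 1.15 2, cmp_versions, decimal 1, decimal 2, ...) is the version comparison used to decide whether the installed lcov is new enough to pick the right coverage options. A standalone sketch of the same idea, not the exact common.sh implementation:

    version_lt() {
        # return 0 (true) when $1 is strictly older than $2, comparing dot-separated numeric fields
        local IFS=.
        local -a a=($1) b=($2)
        local i
        for (( i = 0; i < ${#a[@]} || i < ${#b[@]}; i++ )); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1   # equal versions are not "less than"
    }

    version_lt 1.15 2 && echo 'lcov 1.15 is older than 2'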
00:06:38.398 09:34:26 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:38.398 09:34:26 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:38.398 09:34:26 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:38.398 09:34:26 alias_rpc -- scripts/common.sh@368 -- # return 0 00:06:38.398 09:34:26 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:38.398 09:34:26 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:38.398 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.398 --rc genhtml_branch_coverage=1 00:06:38.398 --rc genhtml_function_coverage=1 00:06:38.398 --rc genhtml_legend=1 00:06:38.398 --rc geninfo_all_blocks=1 00:06:38.398 --rc geninfo_unexecuted_blocks=1 00:06:38.398 00:06:38.398 ' 00:06:38.398 09:34:26 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:38.398 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.398 --rc genhtml_branch_coverage=1 00:06:38.398 --rc genhtml_function_coverage=1 00:06:38.398 --rc genhtml_legend=1 00:06:38.398 --rc geninfo_all_blocks=1 00:06:38.398 --rc geninfo_unexecuted_blocks=1 00:06:38.398 00:06:38.398 ' 00:06:38.398 09:34:26 alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:38.398 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.398 --rc genhtml_branch_coverage=1 00:06:38.398 --rc genhtml_function_coverage=1 00:06:38.398 --rc genhtml_legend=1 00:06:38.398 --rc geninfo_all_blocks=1 00:06:38.398 --rc geninfo_unexecuted_blocks=1 00:06:38.398 00:06:38.398 ' 00:06:38.398 09:34:26 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:38.398 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.398 --rc genhtml_branch_coverage=1 00:06:38.398 --rc genhtml_function_coverage=1 00:06:38.398 --rc genhtml_legend=1 00:06:38.398 --rc geninfo_all_blocks=1 00:06:38.398 --rc geninfo_unexecuted_blocks=1 00:06:38.398 00:06:38.398 ' 00:06:38.398 09:34:26 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:38.398 09:34:26 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57781 00:06:38.398 09:34:26 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:38.398 09:34:26 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57781 00:06:38.398 09:34:26 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 57781 ']' 00:06:38.398 09:34:26 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:38.398 09:34:26 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:38.398 09:34:26 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:38.398 09:34:26 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:38.398 09:34:26 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:38.657 [2024-11-19 09:34:26.077696] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 
00:06:38.657 [2024-11-19 09:34:26.077975] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57781 ] 00:06:38.657 [2024-11-19 09:34:26.220125] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.916 [2024-11-19 09:34:26.284070] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.916 [2024-11-19 09:34:26.355902] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:39.175 09:34:26 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:39.175 09:34:26 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:39.175 09:34:26 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:06:39.434 09:34:26 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57781 00:06:39.434 09:34:26 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 57781 ']' 00:06:39.434 09:34:26 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 57781 00:06:39.434 09:34:26 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:06:39.434 09:34:26 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:39.434 09:34:26 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57781 00:06:39.434 09:34:26 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:39.434 09:34:26 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:39.434 killing process with pid 57781 00:06:39.434 09:34:26 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57781' 00:06:39.434 09:34:26 alias_rpc -- common/autotest_common.sh@973 -- # kill 57781 00:06:39.434 09:34:26 alias_rpc -- common/autotest_common.sh@978 -- # wait 57781 00:06:40.002 ************************************ 00:06:40.002 END TEST alias_rpc 00:06:40.002 ************************************ 00:06:40.002 00:06:40.002 real 0m1.500s 00:06:40.002 user 0m1.599s 00:06:40.002 sys 0m0.441s 00:06:40.002 09:34:27 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:40.002 09:34:27 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:40.002 09:34:27 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:06:40.002 09:34:27 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:40.002 09:34:27 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:40.002 09:34:27 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:40.002 09:34:27 -- common/autotest_common.sh@10 -- # set +x 00:06:40.002 ************************************ 00:06:40.002 START TEST spdkcli_tcp 00:06:40.002 ************************************ 00:06:40.002 09:34:27 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:40.002 * Looking for test storage... 
00:06:40.002 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:06:40.002 09:34:27 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:40.002 09:34:27 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:06:40.002 09:34:27 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:40.002 09:34:27 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:40.002 09:34:27 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:40.002 09:34:27 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:40.002 09:34:27 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:40.002 09:34:27 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:40.002 09:34:27 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:40.002 09:34:27 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:40.002 09:34:27 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:40.002 09:34:27 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:40.003 09:34:27 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:40.003 09:34:27 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:40.003 09:34:27 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:40.003 09:34:27 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:40.003 09:34:27 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:06:40.003 09:34:27 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:40.003 09:34:27 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:40.003 09:34:27 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:40.003 09:34:27 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:06:40.003 09:34:27 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:40.003 09:34:27 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:06:40.003 09:34:27 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:40.003 09:34:27 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:40.003 09:34:27 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:06:40.003 09:34:27 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:40.003 09:34:27 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:06:40.003 09:34:27 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:40.003 09:34:27 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:40.003 09:34:27 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:40.003 09:34:27 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:06:40.003 09:34:27 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:40.003 09:34:27 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:40.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.003 --rc genhtml_branch_coverage=1 00:06:40.003 --rc genhtml_function_coverage=1 00:06:40.003 --rc genhtml_legend=1 00:06:40.003 --rc geninfo_all_blocks=1 00:06:40.003 --rc geninfo_unexecuted_blocks=1 00:06:40.003 00:06:40.003 ' 00:06:40.003 09:34:27 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:40.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.003 --rc genhtml_branch_coverage=1 00:06:40.003 --rc genhtml_function_coverage=1 00:06:40.003 --rc genhtml_legend=1 00:06:40.003 --rc geninfo_all_blocks=1 00:06:40.003 --rc geninfo_unexecuted_blocks=1 00:06:40.003 
00:06:40.003 ' 00:06:40.003 09:34:27 spdkcli_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:40.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.003 --rc genhtml_branch_coverage=1 00:06:40.003 --rc genhtml_function_coverage=1 00:06:40.003 --rc genhtml_legend=1 00:06:40.003 --rc geninfo_all_blocks=1 00:06:40.003 --rc geninfo_unexecuted_blocks=1 00:06:40.003 00:06:40.003 ' 00:06:40.003 09:34:27 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:40.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.003 --rc genhtml_branch_coverage=1 00:06:40.003 --rc genhtml_function_coverage=1 00:06:40.003 --rc genhtml_legend=1 00:06:40.003 --rc geninfo_all_blocks=1 00:06:40.003 --rc geninfo_unexecuted_blocks=1 00:06:40.003 00:06:40.003 ' 00:06:40.003 09:34:27 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:06:40.003 09:34:27 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:06:40.003 09:34:27 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:06:40.003 09:34:27 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:40.003 09:34:27 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:40.003 09:34:27 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:40.003 09:34:27 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:40.003 09:34:27 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:40.003 09:34:27 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:40.003 09:34:27 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=57852 00:06:40.003 09:34:27 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:40.003 09:34:27 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 57852 00:06:40.003 09:34:27 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 57852 ']' 00:06:40.003 09:34:27 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:40.003 09:34:27 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:40.003 09:34:27 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:40.003 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:40.003 09:34:27 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:40.003 09:34:27 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:40.261 [2024-11-19 09:34:27.661701] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 
00:06:40.261 [2024-11-19 09:34:27.662055] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57852 ] 00:06:40.261 [2024-11-19 09:34:27.811701] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:40.520 [2024-11-19 09:34:27.893929] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:40.520 [2024-11-19 09:34:27.893943] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.520 [2024-11-19 09:34:27.969318] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:40.778 09:34:28 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:40.778 09:34:28 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:06:40.778 09:34:28 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=57867 00:06:40.778 09:34:28 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:40.778 09:34:28 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:41.037 [ 00:06:41.037 "bdev_malloc_delete", 00:06:41.037 "bdev_malloc_create", 00:06:41.037 "bdev_null_resize", 00:06:41.037 "bdev_null_delete", 00:06:41.037 "bdev_null_create", 00:06:41.037 "bdev_nvme_cuse_unregister", 00:06:41.037 "bdev_nvme_cuse_register", 00:06:41.037 "bdev_opal_new_user", 00:06:41.037 "bdev_opal_set_lock_state", 00:06:41.037 "bdev_opal_delete", 00:06:41.037 "bdev_opal_get_info", 00:06:41.037 "bdev_opal_create", 00:06:41.037 "bdev_nvme_opal_revert", 00:06:41.037 "bdev_nvme_opal_init", 00:06:41.037 "bdev_nvme_send_cmd", 00:06:41.037 "bdev_nvme_set_keys", 00:06:41.037 "bdev_nvme_get_path_iostat", 00:06:41.037 "bdev_nvme_get_mdns_discovery_info", 00:06:41.037 "bdev_nvme_stop_mdns_discovery", 00:06:41.037 "bdev_nvme_start_mdns_discovery", 00:06:41.037 "bdev_nvme_set_multipath_policy", 00:06:41.037 "bdev_nvme_set_preferred_path", 00:06:41.037 "bdev_nvme_get_io_paths", 00:06:41.037 "bdev_nvme_remove_error_injection", 00:06:41.037 "bdev_nvme_add_error_injection", 00:06:41.037 "bdev_nvme_get_discovery_info", 00:06:41.037 "bdev_nvme_stop_discovery", 00:06:41.037 "bdev_nvme_start_discovery", 00:06:41.037 "bdev_nvme_get_controller_health_info", 00:06:41.037 "bdev_nvme_disable_controller", 00:06:41.037 "bdev_nvme_enable_controller", 00:06:41.037 "bdev_nvme_reset_controller", 00:06:41.037 "bdev_nvme_get_transport_statistics", 00:06:41.037 "bdev_nvme_apply_firmware", 00:06:41.037 "bdev_nvme_detach_controller", 00:06:41.037 "bdev_nvme_get_controllers", 00:06:41.037 "bdev_nvme_attach_controller", 00:06:41.037 "bdev_nvme_set_hotplug", 00:06:41.037 "bdev_nvme_set_options", 00:06:41.037 "bdev_passthru_delete", 00:06:41.037 "bdev_passthru_create", 00:06:41.037 "bdev_lvol_set_parent_bdev", 00:06:41.037 "bdev_lvol_set_parent", 00:06:41.037 "bdev_lvol_check_shallow_copy", 00:06:41.037 "bdev_lvol_start_shallow_copy", 00:06:41.037 "bdev_lvol_grow_lvstore", 00:06:41.037 "bdev_lvol_get_lvols", 00:06:41.037 "bdev_lvol_get_lvstores", 00:06:41.037 "bdev_lvol_delete", 00:06:41.037 "bdev_lvol_set_read_only", 00:06:41.037 "bdev_lvol_resize", 00:06:41.037 "bdev_lvol_decouple_parent", 00:06:41.037 "bdev_lvol_inflate", 00:06:41.037 "bdev_lvol_rename", 00:06:41.037 "bdev_lvol_clone_bdev", 00:06:41.037 "bdev_lvol_clone", 00:06:41.037 "bdev_lvol_snapshot", 
00:06:41.037 "bdev_lvol_create", 00:06:41.037 "bdev_lvol_delete_lvstore", 00:06:41.037 "bdev_lvol_rename_lvstore", 00:06:41.037 "bdev_lvol_create_lvstore", 00:06:41.037 "bdev_raid_set_options", 00:06:41.037 "bdev_raid_remove_base_bdev", 00:06:41.037 "bdev_raid_add_base_bdev", 00:06:41.037 "bdev_raid_delete", 00:06:41.037 "bdev_raid_create", 00:06:41.037 "bdev_raid_get_bdevs", 00:06:41.037 "bdev_error_inject_error", 00:06:41.037 "bdev_error_delete", 00:06:41.037 "bdev_error_create", 00:06:41.037 "bdev_split_delete", 00:06:41.037 "bdev_split_create", 00:06:41.037 "bdev_delay_delete", 00:06:41.037 "bdev_delay_create", 00:06:41.037 "bdev_delay_update_latency", 00:06:41.037 "bdev_zone_block_delete", 00:06:41.037 "bdev_zone_block_create", 00:06:41.037 "blobfs_create", 00:06:41.037 "blobfs_detect", 00:06:41.037 "blobfs_set_cache_size", 00:06:41.037 "bdev_aio_delete", 00:06:41.037 "bdev_aio_rescan", 00:06:41.037 "bdev_aio_create", 00:06:41.037 "bdev_ftl_set_property", 00:06:41.037 "bdev_ftl_get_properties", 00:06:41.037 "bdev_ftl_get_stats", 00:06:41.037 "bdev_ftl_unmap", 00:06:41.037 "bdev_ftl_unload", 00:06:41.037 "bdev_ftl_delete", 00:06:41.037 "bdev_ftl_load", 00:06:41.037 "bdev_ftl_create", 00:06:41.037 "bdev_virtio_attach_controller", 00:06:41.037 "bdev_virtio_scsi_get_devices", 00:06:41.037 "bdev_virtio_detach_controller", 00:06:41.037 "bdev_virtio_blk_set_hotplug", 00:06:41.037 "bdev_iscsi_delete", 00:06:41.037 "bdev_iscsi_create", 00:06:41.037 "bdev_iscsi_set_options", 00:06:41.037 "bdev_uring_delete", 00:06:41.037 "bdev_uring_rescan", 00:06:41.037 "bdev_uring_create", 00:06:41.037 "accel_error_inject_error", 00:06:41.037 "ioat_scan_accel_module", 00:06:41.037 "dsa_scan_accel_module", 00:06:41.037 "iaa_scan_accel_module", 00:06:41.037 "keyring_file_remove_key", 00:06:41.037 "keyring_file_add_key", 00:06:41.037 "keyring_linux_set_options", 00:06:41.037 "fsdev_aio_delete", 00:06:41.037 "fsdev_aio_create", 00:06:41.037 "iscsi_get_histogram", 00:06:41.037 "iscsi_enable_histogram", 00:06:41.037 "iscsi_set_options", 00:06:41.037 "iscsi_get_auth_groups", 00:06:41.037 "iscsi_auth_group_remove_secret", 00:06:41.037 "iscsi_auth_group_add_secret", 00:06:41.037 "iscsi_delete_auth_group", 00:06:41.037 "iscsi_create_auth_group", 00:06:41.037 "iscsi_set_discovery_auth", 00:06:41.037 "iscsi_get_options", 00:06:41.037 "iscsi_target_node_request_logout", 00:06:41.037 "iscsi_target_node_set_redirect", 00:06:41.037 "iscsi_target_node_set_auth", 00:06:41.037 "iscsi_target_node_add_lun", 00:06:41.037 "iscsi_get_stats", 00:06:41.037 "iscsi_get_connections", 00:06:41.037 "iscsi_portal_group_set_auth", 00:06:41.037 "iscsi_start_portal_group", 00:06:41.037 "iscsi_delete_portal_group", 00:06:41.037 "iscsi_create_portal_group", 00:06:41.037 "iscsi_get_portal_groups", 00:06:41.037 "iscsi_delete_target_node", 00:06:41.037 "iscsi_target_node_remove_pg_ig_maps", 00:06:41.037 "iscsi_target_node_add_pg_ig_maps", 00:06:41.037 "iscsi_create_target_node", 00:06:41.037 "iscsi_get_target_nodes", 00:06:41.037 "iscsi_delete_initiator_group", 00:06:41.037 "iscsi_initiator_group_remove_initiators", 00:06:41.037 "iscsi_initiator_group_add_initiators", 00:06:41.037 "iscsi_create_initiator_group", 00:06:41.037 "iscsi_get_initiator_groups", 00:06:41.037 "nvmf_set_crdt", 00:06:41.037 "nvmf_set_config", 00:06:41.037 "nvmf_set_max_subsystems", 00:06:41.037 "nvmf_stop_mdns_prr", 00:06:41.037 "nvmf_publish_mdns_prr", 00:06:41.037 "nvmf_subsystem_get_listeners", 00:06:41.037 "nvmf_subsystem_get_qpairs", 00:06:41.037 
"nvmf_subsystem_get_controllers", 00:06:41.037 "nvmf_get_stats", 00:06:41.037 "nvmf_get_transports", 00:06:41.037 "nvmf_create_transport", 00:06:41.037 "nvmf_get_targets", 00:06:41.037 "nvmf_delete_target", 00:06:41.037 "nvmf_create_target", 00:06:41.037 "nvmf_subsystem_allow_any_host", 00:06:41.037 "nvmf_subsystem_set_keys", 00:06:41.037 "nvmf_subsystem_remove_host", 00:06:41.037 "nvmf_subsystem_add_host", 00:06:41.037 "nvmf_ns_remove_host", 00:06:41.037 "nvmf_ns_add_host", 00:06:41.037 "nvmf_subsystem_remove_ns", 00:06:41.037 "nvmf_subsystem_set_ns_ana_group", 00:06:41.037 "nvmf_subsystem_add_ns", 00:06:41.037 "nvmf_subsystem_listener_set_ana_state", 00:06:41.037 "nvmf_discovery_get_referrals", 00:06:41.037 "nvmf_discovery_remove_referral", 00:06:41.037 "nvmf_discovery_add_referral", 00:06:41.037 "nvmf_subsystem_remove_listener", 00:06:41.037 "nvmf_subsystem_add_listener", 00:06:41.037 "nvmf_delete_subsystem", 00:06:41.037 "nvmf_create_subsystem", 00:06:41.037 "nvmf_get_subsystems", 00:06:41.037 "env_dpdk_get_mem_stats", 00:06:41.037 "nbd_get_disks", 00:06:41.037 "nbd_stop_disk", 00:06:41.037 "nbd_start_disk", 00:06:41.037 "ublk_recover_disk", 00:06:41.037 "ublk_get_disks", 00:06:41.037 "ublk_stop_disk", 00:06:41.037 "ublk_start_disk", 00:06:41.037 "ublk_destroy_target", 00:06:41.037 "ublk_create_target", 00:06:41.038 "virtio_blk_create_transport", 00:06:41.038 "virtio_blk_get_transports", 00:06:41.038 "vhost_controller_set_coalescing", 00:06:41.038 "vhost_get_controllers", 00:06:41.038 "vhost_delete_controller", 00:06:41.038 "vhost_create_blk_controller", 00:06:41.038 "vhost_scsi_controller_remove_target", 00:06:41.038 "vhost_scsi_controller_add_target", 00:06:41.038 "vhost_start_scsi_controller", 00:06:41.038 "vhost_create_scsi_controller", 00:06:41.038 "thread_set_cpumask", 00:06:41.038 "scheduler_set_options", 00:06:41.038 "framework_get_governor", 00:06:41.038 "framework_get_scheduler", 00:06:41.038 "framework_set_scheduler", 00:06:41.038 "framework_get_reactors", 00:06:41.038 "thread_get_io_channels", 00:06:41.038 "thread_get_pollers", 00:06:41.038 "thread_get_stats", 00:06:41.038 "framework_monitor_context_switch", 00:06:41.038 "spdk_kill_instance", 00:06:41.038 "log_enable_timestamps", 00:06:41.038 "log_get_flags", 00:06:41.038 "log_clear_flag", 00:06:41.038 "log_set_flag", 00:06:41.038 "log_get_level", 00:06:41.038 "log_set_level", 00:06:41.038 "log_get_print_level", 00:06:41.038 "log_set_print_level", 00:06:41.038 "framework_enable_cpumask_locks", 00:06:41.038 "framework_disable_cpumask_locks", 00:06:41.038 "framework_wait_init", 00:06:41.038 "framework_start_init", 00:06:41.038 "scsi_get_devices", 00:06:41.038 "bdev_get_histogram", 00:06:41.038 "bdev_enable_histogram", 00:06:41.038 "bdev_set_qos_limit", 00:06:41.038 "bdev_set_qd_sampling_period", 00:06:41.038 "bdev_get_bdevs", 00:06:41.038 "bdev_reset_iostat", 00:06:41.038 "bdev_get_iostat", 00:06:41.038 "bdev_examine", 00:06:41.038 "bdev_wait_for_examine", 00:06:41.038 "bdev_set_options", 00:06:41.038 "accel_get_stats", 00:06:41.038 "accel_set_options", 00:06:41.038 "accel_set_driver", 00:06:41.038 "accel_crypto_key_destroy", 00:06:41.038 "accel_crypto_keys_get", 00:06:41.038 "accel_crypto_key_create", 00:06:41.038 "accel_assign_opc", 00:06:41.038 "accel_get_module_info", 00:06:41.038 "accel_get_opc_assignments", 00:06:41.038 "vmd_rescan", 00:06:41.038 "vmd_remove_device", 00:06:41.038 "vmd_enable", 00:06:41.038 "sock_get_default_impl", 00:06:41.038 "sock_set_default_impl", 00:06:41.038 "sock_impl_set_options", 00:06:41.038 
"sock_impl_get_options", 00:06:41.038 "iobuf_get_stats", 00:06:41.038 "iobuf_set_options", 00:06:41.038 "keyring_get_keys", 00:06:41.038 "framework_get_pci_devices", 00:06:41.038 "framework_get_config", 00:06:41.038 "framework_get_subsystems", 00:06:41.038 "fsdev_set_opts", 00:06:41.038 "fsdev_get_opts", 00:06:41.038 "trace_get_info", 00:06:41.038 "trace_get_tpoint_group_mask", 00:06:41.038 "trace_disable_tpoint_group", 00:06:41.038 "trace_enable_tpoint_group", 00:06:41.038 "trace_clear_tpoint_mask", 00:06:41.038 "trace_set_tpoint_mask", 00:06:41.038 "notify_get_notifications", 00:06:41.038 "notify_get_types", 00:06:41.038 "spdk_get_version", 00:06:41.038 "rpc_get_methods" 00:06:41.038 ] 00:06:41.038 09:34:28 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:41.038 09:34:28 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:41.038 09:34:28 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:41.038 09:34:28 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:41.038 09:34:28 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 57852 00:06:41.038 09:34:28 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 57852 ']' 00:06:41.038 09:34:28 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 57852 00:06:41.038 09:34:28 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:06:41.038 09:34:28 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:41.038 09:34:28 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57852 00:06:41.038 killing process with pid 57852 00:06:41.038 09:34:28 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:41.038 09:34:28 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:41.038 09:34:28 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57852' 00:06:41.038 09:34:28 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 57852 00:06:41.038 09:34:28 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 57852 00:06:41.297 ************************************ 00:06:41.297 END TEST spdkcli_tcp 00:06:41.297 ************************************ 00:06:41.297 00:06:41.297 real 0m1.546s 00:06:41.297 user 0m2.545s 00:06:41.297 sys 0m0.493s 00:06:41.297 09:34:28 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:41.297 09:34:28 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:41.556 09:34:28 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:41.556 09:34:28 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:41.556 09:34:28 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:41.556 09:34:28 -- common/autotest_common.sh@10 -- # set +x 00:06:41.556 ************************************ 00:06:41.556 START TEST dpdk_mem_utility 00:06:41.556 ************************************ 00:06:41.556 09:34:28 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:41.556 * Looking for test storage... 
00:06:41.556 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:06:41.556 09:34:29 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:41.556 09:34:29 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:06:41.556 09:34:29 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:41.556 09:34:29 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:41.556 09:34:29 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:41.556 09:34:29 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:41.556 09:34:29 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:41.556 09:34:29 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:06:41.556 09:34:29 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:06:41.556 09:34:29 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:06:41.556 09:34:29 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:06:41.556 09:34:29 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:06:41.556 09:34:29 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:06:41.556 09:34:29 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:06:41.556 09:34:29 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:41.556 09:34:29 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:06:41.556 09:34:29 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:06:41.556 09:34:29 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:41.556 09:34:29 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:41.556 09:34:29 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:06:41.556 09:34:29 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:06:41.556 09:34:29 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:41.556 09:34:29 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:06:41.556 09:34:29 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:06:41.556 09:34:29 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:06:41.556 09:34:29 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:06:41.556 09:34:29 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:41.556 09:34:29 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:06:41.556 09:34:29 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:06:41.556 09:34:29 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:41.556 09:34:29 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:41.556 09:34:29 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:06:41.556 09:34:29 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:41.556 09:34:29 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:41.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.556 --rc genhtml_branch_coverage=1 00:06:41.556 --rc genhtml_function_coverage=1 00:06:41.556 --rc genhtml_legend=1 00:06:41.556 --rc geninfo_all_blocks=1 00:06:41.556 --rc geninfo_unexecuted_blocks=1 00:06:41.556 00:06:41.556 ' 00:06:41.556 09:34:29 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:41.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.556 --rc 
genhtml_branch_coverage=1 00:06:41.556 --rc genhtml_function_coverage=1 00:06:41.556 --rc genhtml_legend=1 00:06:41.556 --rc geninfo_all_blocks=1 00:06:41.556 --rc geninfo_unexecuted_blocks=1 00:06:41.556 00:06:41.556 ' 00:06:41.556 09:34:29 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:41.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.556 --rc genhtml_branch_coverage=1 00:06:41.556 --rc genhtml_function_coverage=1 00:06:41.556 --rc genhtml_legend=1 00:06:41.556 --rc geninfo_all_blocks=1 00:06:41.556 --rc geninfo_unexecuted_blocks=1 00:06:41.556 00:06:41.556 ' 00:06:41.556 09:34:29 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:41.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.556 --rc genhtml_branch_coverage=1 00:06:41.556 --rc genhtml_function_coverage=1 00:06:41.556 --rc genhtml_legend=1 00:06:41.556 --rc geninfo_all_blocks=1 00:06:41.556 --rc geninfo_unexecuted_blocks=1 00:06:41.556 00:06:41.556 ' 00:06:41.556 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:41.556 09:34:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:41.556 09:34:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=57949 00:06:41.556 09:34:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 57949 00:06:41.556 09:34:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:41.556 09:34:29 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 57949 ']' 00:06:41.556 09:34:29 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:41.556 09:34:29 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:41.556 09:34:29 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:41.556 09:34:29 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:41.556 09:34:29 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:41.814 [2024-11-19 09:34:29.182662] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 
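The waitforlisten step traced above simply blocks until the freshly started spdk_tgt process accepts connections on the UNIX domain socket /var/tmp/spdk.sock before any RPCs are issued. A minimal Python sketch of that polling pattern follows; the socket path matches the log, but the timeout and retry interval are illustrative, not taken from the harness:

    import socket
    import time

    def wait_for_listen(sock_path="/var/tmp/spdk.sock", timeout=30.0):
        # Poll until the SPDK target accepts a connection on its RPC socket.
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            try:
                s.connect(sock_path)
                return True          # target is up and listening
            except OSError:
                time.sleep(0.2)      # not up yet; retry shortly
            finally:
                s.close()
        return False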
00:06:41.814 [2024-11-19 09:34:29.182969] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57949 ] 00:06:41.814 [2024-11-19 09:34:29.326591] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.814 [2024-11-19 09:34:29.392286] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.071 [2024-11-19 09:34:29.470187] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:42.637 09:34:30 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:42.637 09:34:30 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:06:42.637 09:34:30 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:42.637 09:34:30 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:42.637 09:34:30 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:42.637 09:34:30 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:42.637 { 00:06:42.637 "filename": "/tmp/spdk_mem_dump.txt" 00:06:42.637 } 00:06:42.637 09:34:30 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:42.637 09:34:30 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:42.897 DPDK memory size 810.000000 MiB in 1 heap(s) 00:06:42.897 1 heaps totaling size 810.000000 MiB 00:06:42.897 size: 810.000000 MiB heap id: 0 00:06:42.897 end heaps---------- 00:06:42.897 9 mempools totaling size 595.772034 MiB 00:06:42.897 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:42.897 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:42.897 size: 92.545471 MiB name: bdev_io_57949 00:06:42.897 size: 50.003479 MiB name: msgpool_57949 00:06:42.897 size: 36.509338 MiB name: fsdev_io_57949 00:06:42.897 size: 21.763794 MiB name: PDU_Pool 00:06:42.897 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:42.897 size: 4.133484 MiB name: evtpool_57949 00:06:42.897 size: 0.026123 MiB name: Session_Pool 00:06:42.897 end mempools------- 00:06:42.897 6 memzones totaling size 4.142822 MiB 00:06:42.897 size: 1.000366 MiB name: RG_ring_0_57949 00:06:42.897 size: 1.000366 MiB name: RG_ring_1_57949 00:06:42.897 size: 1.000366 MiB name: RG_ring_4_57949 00:06:42.897 size: 1.000366 MiB name: RG_ring_5_57949 00:06:42.897 size: 0.125366 MiB name: RG_ring_2_57949 00:06:42.897 size: 0.015991 MiB name: RG_ring_3_57949 00:06:42.897 end memzones------- 00:06:42.897 09:34:30 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:06:42.897 heap id: 0 total size: 810.000000 MiB number of busy elements: 319 number of free elements: 15 00:06:42.897 list of free elements. 
size: 10.812134 MiB 00:06:42.897 element at address: 0x200018a00000 with size: 0.999878 MiB 00:06:42.897 element at address: 0x200018c00000 with size: 0.999878 MiB 00:06:42.897 element at address: 0x200031800000 with size: 0.994446 MiB 00:06:42.897 element at address: 0x200000400000 with size: 0.993958 MiB 00:06:42.897 element at address: 0x200006400000 with size: 0.959839 MiB 00:06:42.897 element at address: 0x200012c00000 with size: 0.954285 MiB 00:06:42.897 element at address: 0x200018e00000 with size: 0.936584 MiB 00:06:42.897 element at address: 0x200000200000 with size: 0.717346 MiB 00:06:42.897 element at address: 0x20001a600000 with size: 0.566589 MiB 00:06:42.897 element at address: 0x20000a600000 with size: 0.488892 MiB 00:06:42.897 element at address: 0x200000c00000 with size: 0.487000 MiB 00:06:42.897 element at address: 0x200019000000 with size: 0.485657 MiB 00:06:42.897 element at address: 0x200003e00000 with size: 0.480286 MiB 00:06:42.897 element at address: 0x200027a00000 with size: 0.395752 MiB 00:06:42.897 element at address: 0x200000800000 with size: 0.351746 MiB 00:06:42.897 list of standard malloc elements. size: 199.268982 MiB 00:06:42.897 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:06:42.897 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:06:42.897 element at address: 0x200018afff80 with size: 1.000122 MiB 00:06:42.897 element at address: 0x200018cfff80 with size: 1.000122 MiB 00:06:42.897 element at address: 0x200018efff80 with size: 1.000122 MiB 00:06:42.897 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:42.897 element at address: 0x200018eeff00 with size: 0.062622 MiB 00:06:42.897 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:42.897 element at address: 0x200018eefdc0 with size: 0.000305 MiB 00:06:42.897 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:42.897 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:42.897 element at address: 0x2000004fe740 with size: 0.000183 MiB 00:06:42.897 element at address: 0x2000004fe800 with size: 0.000183 MiB 00:06:42.897 element at address: 0x2000004fe8c0 with size: 0.000183 MiB 00:06:42.897 element at address: 0x2000004fe980 with size: 0.000183 MiB 00:06:42.897 element at address: 0x2000004fea40 with size: 0.000183 MiB 00:06:42.897 element at address: 0x2000004feb00 with size: 0.000183 MiB 00:06:42.897 element at address: 0x2000004febc0 with size: 0.000183 MiB 00:06:42.897 element at address: 0x2000004fec80 with size: 0.000183 MiB 00:06:42.897 element at address: 0x2000004fed40 with size: 0.000183 MiB 00:06:42.897 element at address: 0x2000004fee00 with size: 0.000183 MiB 00:06:42.897 element at address: 0x2000004feec0 with size: 0.000183 MiB 00:06:42.897 element at address: 0x2000004fef80 with size: 0.000183 MiB 00:06:42.897 element at address: 0x2000004ff040 with size: 0.000183 MiB 00:06:42.897 element at address: 0x2000004ff100 with size: 0.000183 MiB 00:06:42.897 element at address: 0x2000004ff1c0 with size: 0.000183 MiB 00:06:42.897 element at address: 0x2000004ff280 with size: 0.000183 MiB 00:06:42.897 element at address: 0x2000004ff340 with size: 0.000183 MiB 00:06:42.897 element at address: 0x2000004ff400 with size: 0.000183 MiB 00:06:42.897 element at address: 0x2000004ff4c0 with size: 0.000183 MiB 00:06:42.897 element at address: 0x2000004ff580 with size: 0.000183 MiB 00:06:42.897 element at address: 0x2000004ff640 with size: 0.000183 MiB 00:06:42.897 element at address: 0x2000004ff700 with size: 0.000183 MiB 
00:06:42.897 element at address: 0x2000004ff7c0 with size: 0.000183 MiB 00:06:42.897 element at address: 0x2000004ff880 with size: 0.000183 MiB 00:06:42.898 element at address: 0x2000004ff940 with size: 0.000183 MiB 00:06:42.898 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:06:42.898 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:06:42.898 element at address: 0x2000004ffcc0 with size: 0.000183 MiB 00:06:42.898 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:06:42.898 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:06:42.898 element at address: 0x20000085a0c0 with size: 0.000183 MiB 00:06:42.898 element at address: 0x20000085a2c0 with size: 0.000183 MiB 00:06:42.898 element at address: 0x20000085e580 with size: 0.000183 MiB 00:06:42.898 element at address: 0x20000087e840 with size: 0.000183 MiB 00:06:42.898 element at address: 0x20000087e900 with size: 0.000183 MiB 00:06:42.898 element at address: 0x20000087e9c0 with size: 0.000183 MiB 00:06:42.898 element at address: 0x20000087ea80 with size: 0.000183 MiB 00:06:42.898 element at address: 0x20000087eb40 with size: 0.000183 MiB 00:06:42.898 element at address: 0x20000087ec00 with size: 0.000183 MiB 00:06:42.898 element at address: 0x20000087ecc0 with size: 0.000183 MiB 00:06:42.898 element at address: 0x20000087ed80 with size: 0.000183 MiB 00:06:42.898 element at address: 0x20000087ee40 with size: 0.000183 MiB 00:06:42.898 element at address: 0x20000087ef00 with size: 0.000183 MiB 00:06:42.898 element at address: 0x20000087efc0 with size: 0.000183 MiB 00:06:42.898 element at address: 0x20000087f080 with size: 0.000183 MiB 00:06:42.898 element at address: 0x20000087f140 with size: 0.000183 MiB 00:06:42.898 element at address: 0x20000087f200 with size: 0.000183 MiB 00:06:42.898 element at address: 0x20000087f2c0 with size: 0.000183 MiB 00:06:42.898 element at address: 0x20000087f380 with size: 0.000183 MiB 00:06:42.898 element at address: 0x20000087f440 with size: 0.000183 MiB 00:06:42.898 element at address: 0x20000087f500 with size: 0.000183 MiB 00:06:42.898 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:06:42.898 element at address: 0x20000087f680 with size: 0.000183 MiB 00:06:42.898 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:06:42.898 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:06:42.898 element at address: 0x200000c7cac0 with size: 0.000183 MiB 00:06:42.898 element at address: 0x200000c7cb80 with size: 0.000183 MiB 00:06:42.898 element at address: 0x200000c7cc40 with size: 0.000183 MiB 00:06:42.898 element at address: 0x200000c7cd00 with size: 0.000183 MiB 00:06:42.898 element at address: 0x200000c7cdc0 with size: 0.000183 MiB 00:06:42.898 element at address: 0x200000c7ce80 with size: 0.000183 MiB 00:06:42.898 element at address: 0x200000c7cf40 with size: 0.000183 MiB 00:06:42.898 element at address: 0x200000c7d000 with size: 0.000183 MiB 00:06:42.898 element at address: 0x200000c7d0c0 with size: 0.000183 MiB 00:06:42.898 element at address: 0x200000c7d180 with size: 0.000183 MiB 00:06:42.898 element at address: 0x200000c7d240 with size: 0.000183 MiB 00:06:42.898 element at address: 0x200000c7d300 with size: 0.000183 MiB 00:06:42.898 element at address: 0x200000c7d3c0 with size: 0.000183 MiB 00:06:42.898 element at address: 0x200000c7d480 with size: 0.000183 MiB 00:06:42.898 element at address: 0x200000c7d540 with size: 0.000183 MiB 00:06:42.898 element at address: 0x200000c7d600 with size: 0.000183 MiB 00:06:42.898 element at 
address: 0x200000c7d6c0 with size: 0.000183 MiB 00:06:42.898 element at address: 0x200000c7d780 with size: 0.000183 MiB 00:06:42.898 element at address: 0x200000c7d840 with size: 0.000183 MiB 00:06:42.898 element at address: 0x200000c7d900 with size: 0.000183 MiB 00:06:42.898 element at address: 0x200000c7d9c0 with size: 0.000183 MiB 00:06:42.898 element at address: 0x200000c7da80 with size: 0.000183 MiB 00:06:42.898 element at address: 0x200000c7db40 with size: 0.000183 MiB 00:06:42.898 element at address: 0x200000c7dc00 with size: 0.000183 MiB 00:06:42.898 element at address: 0x200000c7dcc0 with size: 0.000183 MiB 00:06:42.898 element at address: 0x200000c7dd80 with size: 0.000183 MiB 00:06:42.898 element at address: 0x200000c7de40 with size: 0.000183 MiB 00:06:42.898 element at address: 0x200000c7df00 with size: 0.000183 MiB 00:06:42.898 element at address: 0x200000c7dfc0 with size: 0.000183 MiB 00:06:42.898 element at address: 0x200000c7e080 with size: 0.000183 MiB 00:06:42.898 element at address: 0x200000c7e140 with size: 0.000183 MiB 00:06:42.898 element at address: 0x200000c7e200 with size: 0.000183 MiB 00:06:42.898 element at address: 0x200000c7e2c0 with size: 0.000183 MiB 00:06:42.898 element at address: 0x200000c7e380 with size: 0.000183 MiB 00:06:42.898 element at address: 0x200000c7e440 with size: 0.000183 MiB 00:06:42.898 element at address: 0x200000c7e500 with size: 0.000183 MiB 00:06:42.898 element at address: 0x200000c7e5c0 with size: 0.000183 MiB 00:06:42.898 element at address: 0x200000c7e680 with size: 0.000183 MiB 00:06:42.898 element at address: 0x200000c7e740 with size: 0.000183 MiB 00:06:42.898 element at address: 0x200000c7e800 with size: 0.000183 MiB 00:06:42.898 element at address: 0x200000c7e8c0 with size: 0.000183 MiB 00:06:42.898 element at address: 0x200000c7e980 with size: 0.000183 MiB 00:06:42.898 element at address: 0x200000c7ea40 with size: 0.000183 MiB 00:06:42.898 element at address: 0x200000c7eb00 with size: 0.000183 MiB 00:06:42.898 element at address: 0x200000c7ebc0 with size: 0.000183 MiB 00:06:42.898 element at address: 0x200000c7ec80 with size: 0.000183 MiB 00:06:42.898 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:06:42.898 element at address: 0x200000cff000 with size: 0.000183 MiB 00:06:42.898 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:06:42.898 element at address: 0x200003e7af40 with size: 0.000183 MiB 00:06:42.898 element at address: 0x200003e7b000 with size: 0.000183 MiB 00:06:42.898 element at address: 0x200003e7b0c0 with size: 0.000183 MiB 00:06:42.898 element at address: 0x200003e7b180 with size: 0.000183 MiB 00:06:42.898 element at address: 0x200003e7b240 with size: 0.000183 MiB 00:06:42.898 element at address: 0x200003e7b300 with size: 0.000183 MiB 00:06:42.898 element at address: 0x200003e7b3c0 with size: 0.000183 MiB 00:06:42.898 element at address: 0x200003e7b480 with size: 0.000183 MiB 00:06:42.898 element at address: 0x200003e7b540 with size: 0.000183 MiB 00:06:42.898 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:06:42.898 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:06:42.898 element at address: 0x200003efb980 with size: 0.000183 MiB 00:06:42.898 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:06:42.898 element at address: 0x20000a67d280 with size: 0.000183 MiB 00:06:42.898 element at address: 0x20000a67d340 with size: 0.000183 MiB 00:06:42.898 element at address: 0x20000a67d400 with size: 0.000183 MiB 00:06:42.898 element at address: 0x20000a67d4c0 
with size: 0.000183 MiB 00:06:42.898 element at address: 0x20000a67d580 with size: 0.000183 MiB 00:06:42.898 element at address: 0x20000a67d640 with size: 0.000183 MiB 00:06:42.898 element at address: 0x20000a67d700 with size: 0.000183 MiB 00:06:42.898 element at address: 0x20000a67d7c0 with size: 0.000183 MiB 00:06:42.898 element at address: 0x20000a67d880 with size: 0.000183 MiB 00:06:42.898 element at address: 0x20000a67d940 with size: 0.000183 MiB 00:06:42.898 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:06:42.898 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:06:42.898 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:06:42.898 element at address: 0x200012cf44c0 with size: 0.000183 MiB 00:06:42.898 element at address: 0x200018eefc40 with size: 0.000183 MiB 00:06:42.898 element at address: 0x200018eefd00 with size: 0.000183 MiB 00:06:42.898 element at address: 0x2000190bc740 with size: 0.000183 MiB 00:06:42.898 element at address: 0x20001a6910c0 with size: 0.000183 MiB 00:06:42.898 element at address: 0x20001a691180 with size: 0.000183 MiB 00:06:42.898 element at address: 0x20001a691240 with size: 0.000183 MiB 00:06:42.898 element at address: 0x20001a691300 with size: 0.000183 MiB 00:06:42.898 element at address: 0x20001a6913c0 with size: 0.000183 MiB 00:06:42.898 element at address: 0x20001a691480 with size: 0.000183 MiB 00:06:42.898 element at address: 0x20001a691540 with size: 0.000183 MiB 00:06:42.898 element at address: 0x20001a691600 with size: 0.000183 MiB 00:06:42.898 element at address: 0x20001a6916c0 with size: 0.000183 MiB 00:06:42.898 element at address: 0x20001a691780 with size: 0.000183 MiB 00:06:42.898 element at address: 0x20001a691840 with size: 0.000183 MiB 00:06:42.898 element at address: 0x20001a691900 with size: 0.000183 MiB 00:06:42.898 element at address: 0x20001a6919c0 with size: 0.000183 MiB 00:06:42.898 element at address: 0x20001a691a80 with size: 0.000183 MiB 00:06:42.898 element at address: 0x20001a691b40 with size: 0.000183 MiB 00:06:42.898 element at address: 0x20001a691c00 with size: 0.000183 MiB 00:06:42.898 element at address: 0x20001a691cc0 with size: 0.000183 MiB 00:06:42.898 element at address: 0x20001a691d80 with size: 0.000183 MiB 00:06:42.898 element at address: 0x20001a691e40 with size: 0.000183 MiB 00:06:42.898 element at address: 0x20001a691f00 with size: 0.000183 MiB 00:06:42.898 element at address: 0x20001a691fc0 with size: 0.000183 MiB 00:06:42.898 element at address: 0x20001a692080 with size: 0.000183 MiB 00:06:42.898 element at address: 0x20001a692140 with size: 0.000183 MiB 00:06:42.898 element at address: 0x20001a692200 with size: 0.000183 MiB 00:06:42.898 element at address: 0x20001a6922c0 with size: 0.000183 MiB 00:06:42.898 element at address: 0x20001a692380 with size: 0.000183 MiB 00:06:42.898 element at address: 0x20001a692440 with size: 0.000183 MiB 00:06:42.898 element at address: 0x20001a692500 with size: 0.000183 MiB 00:06:42.898 element at address: 0x20001a6925c0 with size: 0.000183 MiB 00:06:42.898 element at address: 0x20001a692680 with size: 0.000183 MiB 00:06:42.898 element at address: 0x20001a692740 with size: 0.000183 MiB 00:06:42.898 element at address: 0x20001a692800 with size: 0.000183 MiB 00:06:42.898 element at address: 0x20001a6928c0 with size: 0.000183 MiB 00:06:42.898 element at address: 0x20001a692980 with size: 0.000183 MiB 00:06:42.898 element at address: 0x20001a692a40 with size: 0.000183 MiB 00:06:42.898 element at address: 0x20001a692b00 with size: 0.000183 MiB 
00:06:42.898 element at address: 0x20001a692bc0 with size: 0.000183 MiB 00:06:42.898 element at address: 0x20001a692c80 with size: 0.000183 MiB 00:06:42.898 element at address: 0x20001a692d40 with size: 0.000183 MiB 00:06:42.898 element at address: 0x20001a692e00 with size: 0.000183 MiB 00:06:42.898 element at address: 0x20001a692ec0 with size: 0.000183 MiB 00:06:42.899 element at address: 0x20001a692f80 with size: 0.000183 MiB 00:06:42.899 element at address: 0x20001a693040 with size: 0.000183 MiB 00:06:42.899 element at address: 0x20001a693100 with size: 0.000183 MiB 00:06:42.899 element at address: 0x20001a6931c0 with size: 0.000183 MiB 00:06:42.899 element at address: 0x20001a693280 with size: 0.000183 MiB 00:06:42.899 element at address: 0x20001a693340 with size: 0.000183 MiB 00:06:42.899 element at address: 0x20001a693400 with size: 0.000183 MiB 00:06:42.899 element at address: 0x20001a6934c0 with size: 0.000183 MiB 00:06:42.899 element at address: 0x20001a693580 with size: 0.000183 MiB 00:06:42.899 element at address: 0x20001a693640 with size: 0.000183 MiB 00:06:42.899 element at address: 0x20001a693700 with size: 0.000183 MiB 00:06:42.899 element at address: 0x20001a6937c0 with size: 0.000183 MiB 00:06:42.899 element at address: 0x20001a693880 with size: 0.000183 MiB 00:06:42.899 element at address: 0x20001a693940 with size: 0.000183 MiB 00:06:42.899 element at address: 0x20001a693a00 with size: 0.000183 MiB 00:06:42.899 element at address: 0x20001a693ac0 with size: 0.000183 MiB 00:06:42.899 element at address: 0x20001a693b80 with size: 0.000183 MiB 00:06:42.899 element at address: 0x20001a693c40 with size: 0.000183 MiB 00:06:42.899 element at address: 0x20001a693d00 with size: 0.000183 MiB 00:06:42.899 element at address: 0x20001a693dc0 with size: 0.000183 MiB 00:06:42.899 element at address: 0x20001a693e80 with size: 0.000183 MiB 00:06:42.899 element at address: 0x20001a693f40 with size: 0.000183 MiB 00:06:42.899 element at address: 0x20001a694000 with size: 0.000183 MiB 00:06:42.899 element at address: 0x20001a6940c0 with size: 0.000183 MiB 00:06:42.899 element at address: 0x20001a694180 with size: 0.000183 MiB 00:06:42.899 element at address: 0x20001a694240 with size: 0.000183 MiB 00:06:42.899 element at address: 0x20001a694300 with size: 0.000183 MiB 00:06:42.899 element at address: 0x20001a6943c0 with size: 0.000183 MiB 00:06:42.899 element at address: 0x20001a694480 with size: 0.000183 MiB 00:06:42.899 element at address: 0x20001a694540 with size: 0.000183 MiB 00:06:42.899 element at address: 0x20001a694600 with size: 0.000183 MiB 00:06:42.899 element at address: 0x20001a6946c0 with size: 0.000183 MiB 00:06:42.899 element at address: 0x20001a694780 with size: 0.000183 MiB 00:06:42.899 element at address: 0x20001a694840 with size: 0.000183 MiB 00:06:42.899 element at address: 0x20001a694900 with size: 0.000183 MiB 00:06:42.899 element at address: 0x20001a6949c0 with size: 0.000183 MiB 00:06:42.899 element at address: 0x20001a694a80 with size: 0.000183 MiB 00:06:42.899 element at address: 0x20001a694b40 with size: 0.000183 MiB 00:06:42.899 element at address: 0x20001a694c00 with size: 0.000183 MiB 00:06:42.899 element at address: 0x20001a694cc0 with size: 0.000183 MiB 00:06:42.899 element at address: 0x20001a694d80 with size: 0.000183 MiB 00:06:42.899 element at address: 0x20001a694e40 with size: 0.000183 MiB 00:06:42.899 element at address: 0x20001a694f00 with size: 0.000183 MiB 00:06:42.899 element at address: 0x20001a694fc0 with size: 0.000183 MiB 00:06:42.899 element at 
address: 0x20001a695080 with size: 0.000183 MiB 00:06:42.899 element at address: 0x20001a695140 with size: 0.000183 MiB 00:06:42.899 element at address: 0x20001a695200 with size: 0.000183 MiB 00:06:42.899 element at address: 0x20001a6952c0 with size: 0.000183 MiB 00:06:42.899 element at address: 0x20001a695380 with size: 0.000183 MiB 00:06:42.899 element at address: 0x20001a695440 with size: 0.000183 MiB 00:06:42.899 element at address: 0x200027a65500 with size: 0.000183 MiB 00:06:42.899 element at address: 0x200027a655c0 with size: 0.000183 MiB 00:06:42.899 element at address: 0x200027a6c1c0 with size: 0.000183 MiB 00:06:42.899 element at address: 0x200027a6c3c0 with size: 0.000183 MiB 00:06:42.899 element at address: 0x200027a6c480 with size: 0.000183 MiB 00:06:42.899 element at address: 0x200027a6c540 with size: 0.000183 MiB 00:06:42.899 element at address: 0x200027a6c600 with size: 0.000183 MiB 00:06:42.899 element at address: 0x200027a6c6c0 with size: 0.000183 MiB 00:06:42.899 element at address: 0x200027a6c780 with size: 0.000183 MiB 00:06:42.899 element at address: 0x200027a6c840 with size: 0.000183 MiB 00:06:42.899 element at address: 0x200027a6c900 with size: 0.000183 MiB 00:06:42.899 element at address: 0x200027a6c9c0 with size: 0.000183 MiB 00:06:42.899 element at address: 0x200027a6ca80 with size: 0.000183 MiB 00:06:42.899 element at address: 0x200027a6cb40 with size: 0.000183 MiB 00:06:42.899 element at address: 0x200027a6cc00 with size: 0.000183 MiB 00:06:42.899 element at address: 0x200027a6ccc0 with size: 0.000183 MiB 00:06:42.899 element at address: 0x200027a6cd80 with size: 0.000183 MiB 00:06:42.899 element at address: 0x200027a6ce40 with size: 0.000183 MiB 00:06:42.899 element at address: 0x200027a6cf00 with size: 0.000183 MiB 00:06:42.899 element at address: 0x200027a6cfc0 with size: 0.000183 MiB 00:06:42.899 element at address: 0x200027a6d080 with size: 0.000183 MiB 00:06:42.899 element at address: 0x200027a6d140 with size: 0.000183 MiB 00:06:42.899 element at address: 0x200027a6d200 with size: 0.000183 MiB 00:06:42.899 element at address: 0x200027a6d2c0 with size: 0.000183 MiB 00:06:42.899 element at address: 0x200027a6d380 with size: 0.000183 MiB 00:06:42.899 element at address: 0x200027a6d440 with size: 0.000183 MiB 00:06:42.899 element at address: 0x200027a6d500 with size: 0.000183 MiB 00:06:42.899 element at address: 0x200027a6d5c0 with size: 0.000183 MiB 00:06:42.899 element at address: 0x200027a6d680 with size: 0.000183 MiB 00:06:42.899 element at address: 0x200027a6d740 with size: 0.000183 MiB 00:06:42.899 element at address: 0x200027a6d800 with size: 0.000183 MiB 00:06:42.899 element at address: 0x200027a6d8c0 with size: 0.000183 MiB 00:06:42.899 element at address: 0x200027a6d980 with size: 0.000183 MiB 00:06:42.899 element at address: 0x200027a6da40 with size: 0.000183 MiB 00:06:42.899 element at address: 0x200027a6db00 with size: 0.000183 MiB 00:06:42.899 element at address: 0x200027a6dbc0 with size: 0.000183 MiB 00:06:42.899 element at address: 0x200027a6dc80 with size: 0.000183 MiB 00:06:42.899 element at address: 0x200027a6dd40 with size: 0.000183 MiB 00:06:42.899 element at address: 0x200027a6de00 with size: 0.000183 MiB 00:06:42.899 element at address: 0x200027a6dec0 with size: 0.000183 MiB 00:06:42.899 element at address: 0x200027a6df80 with size: 0.000183 MiB 00:06:42.899 element at address: 0x200027a6e040 with size: 0.000183 MiB 00:06:42.899 element at address: 0x200027a6e100 with size: 0.000183 MiB 00:06:42.899 element at address: 0x200027a6e1c0 
with size: 0.000183 MiB 00:06:42.899 element at address: 0x200027a6e280 with size: 0.000183 MiB 00:06:42.899 element at address: 0x200027a6e340 with size: 0.000183 MiB 00:06:42.899 element at address: 0x200027a6e400 with size: 0.000183 MiB 00:06:42.899 element at address: 0x200027a6e4c0 with size: 0.000183 MiB 00:06:42.899 element at address: 0x200027a6e580 with size: 0.000183 MiB 00:06:42.899 element at address: 0x200027a6e640 with size: 0.000183 MiB 00:06:42.899 element at address: 0x200027a6e700 with size: 0.000183 MiB 00:06:42.899 element at address: 0x200027a6e7c0 with size: 0.000183 MiB 00:06:42.899 element at address: 0x200027a6e880 with size: 0.000183 MiB 00:06:42.899 element at address: 0x200027a6e940 with size: 0.000183 MiB 00:06:42.899 element at address: 0x200027a6ea00 with size: 0.000183 MiB 00:06:42.899 element at address: 0x200027a6eac0 with size: 0.000183 MiB 00:06:42.899 element at address: 0x200027a6eb80 with size: 0.000183 MiB 00:06:42.899 element at address: 0x200027a6ec40 with size: 0.000183 MiB 00:06:42.899 element at address: 0x200027a6ed00 with size: 0.000183 MiB 00:06:42.899 element at address: 0x200027a6edc0 with size: 0.000183 MiB 00:06:42.899 element at address: 0x200027a6ee80 with size: 0.000183 MiB 00:06:42.899 element at address: 0x200027a6ef40 with size: 0.000183 MiB 00:06:42.899 element at address: 0x200027a6f000 with size: 0.000183 MiB 00:06:42.899 element at address: 0x200027a6f0c0 with size: 0.000183 MiB 00:06:42.899 element at address: 0x200027a6f180 with size: 0.000183 MiB 00:06:42.899 element at address: 0x200027a6f240 with size: 0.000183 MiB 00:06:42.899 element at address: 0x200027a6f300 with size: 0.000183 MiB 00:06:42.899 element at address: 0x200027a6f3c0 with size: 0.000183 MiB 00:06:42.899 element at address: 0x200027a6f480 with size: 0.000183 MiB 00:06:42.899 element at address: 0x200027a6f540 with size: 0.000183 MiB 00:06:42.899 element at address: 0x200027a6f600 with size: 0.000183 MiB 00:06:42.899 element at address: 0x200027a6f6c0 with size: 0.000183 MiB 00:06:42.899 element at address: 0x200027a6f780 with size: 0.000183 MiB 00:06:42.899 element at address: 0x200027a6f840 with size: 0.000183 MiB 00:06:42.899 element at address: 0x200027a6f900 with size: 0.000183 MiB 00:06:42.899 element at address: 0x200027a6f9c0 with size: 0.000183 MiB 00:06:42.899 element at address: 0x200027a6fa80 with size: 0.000183 MiB 00:06:42.899 element at address: 0x200027a6fb40 with size: 0.000183 MiB 00:06:42.899 element at address: 0x200027a6fc00 with size: 0.000183 MiB 00:06:42.899 element at address: 0x200027a6fcc0 with size: 0.000183 MiB 00:06:42.899 element at address: 0x200027a6fd80 with size: 0.000183 MiB 00:06:42.899 element at address: 0x200027a6fe40 with size: 0.000183 MiB 00:06:42.899 element at address: 0x200027a6ff00 with size: 0.000183 MiB 00:06:42.899 list of memzone associated elements. 
size: 599.918884 MiB 00:06:42.899 element at address: 0x20001a695500 with size: 211.416748 MiB 00:06:42.899 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:42.899 element at address: 0x200027a6ffc0 with size: 157.562561 MiB 00:06:42.899 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:42.899 element at address: 0x200012df4780 with size: 92.045044 MiB 00:06:42.899 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_57949_0 00:06:42.899 element at address: 0x200000dff380 with size: 48.003052 MiB 00:06:42.899 associated memzone info: size: 48.002930 MiB name: MP_msgpool_57949_0 00:06:42.899 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:06:42.899 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_57949_0 00:06:42.899 element at address: 0x2000191be940 with size: 20.255554 MiB 00:06:42.899 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:42.899 element at address: 0x2000319feb40 with size: 18.005066 MiB 00:06:42.899 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:42.899 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:06:42.899 associated memzone info: size: 3.000122 MiB name: MP_evtpool_57949_0 00:06:42.899 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:06:42.900 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_57949 00:06:42.900 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:42.900 associated memzone info: size: 1.007996 MiB name: MP_evtpool_57949 00:06:42.900 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:06:42.900 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:42.900 element at address: 0x2000190bc800 with size: 1.008118 MiB 00:06:42.900 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:42.900 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:06:42.900 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:42.900 element at address: 0x200003efba40 with size: 1.008118 MiB 00:06:42.900 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:42.900 element at address: 0x200000cff180 with size: 1.000488 MiB 00:06:42.900 associated memzone info: size: 1.000366 MiB name: RG_ring_0_57949 00:06:42.900 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:06:42.900 associated memzone info: size: 1.000366 MiB name: RG_ring_1_57949 00:06:42.900 element at address: 0x200012cf4580 with size: 1.000488 MiB 00:06:42.900 associated memzone info: size: 1.000366 MiB name: RG_ring_4_57949 00:06:42.900 element at address: 0x2000318fe940 with size: 1.000488 MiB 00:06:42.900 associated memzone info: size: 1.000366 MiB name: RG_ring_5_57949 00:06:42.900 element at address: 0x20000087f740 with size: 0.500488 MiB 00:06:42.900 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_57949 00:06:42.900 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:06:42.900 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_57949 00:06:42.900 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:06:42.900 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:42.900 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:06:42.900 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:42.900 element at address: 0x20001907c540 with size: 0.250488 MiB 00:06:42.900 associated memzone info: size: 0.250366 
MiB name: RG_MP_PDU_immediate_data_Pool 00:06:42.900 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:06:42.900 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_57949 00:06:42.900 element at address: 0x20000085e640 with size: 0.125488 MiB 00:06:42.900 associated memzone info: size: 0.125366 MiB name: RG_ring_2_57949 00:06:42.900 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:06:42.900 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:42.900 element at address: 0x200027a65680 with size: 0.023743 MiB 00:06:42.900 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:42.900 element at address: 0x20000085a380 with size: 0.016113 MiB 00:06:42.900 associated memzone info: size: 0.015991 MiB name: RG_ring_3_57949 00:06:42.900 element at address: 0x200027a6b7c0 with size: 0.002441 MiB 00:06:42.900 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:42.900 element at address: 0x2000004ffb80 with size: 0.000305 MiB 00:06:42.900 associated memzone info: size: 0.000183 MiB name: MP_msgpool_57949 00:06:42.900 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:06:42.900 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_57949 00:06:42.900 element at address: 0x20000085a180 with size: 0.000305 MiB 00:06:42.900 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_57949 00:06:42.900 element at address: 0x200027a6c280 with size: 0.000305 MiB 00:06:42.900 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:42.900 09:34:30 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:42.900 09:34:30 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 57949 00:06:42.900 09:34:30 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 57949 ']' 00:06:42.900 09:34:30 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 57949 00:06:42.900 09:34:30 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:06:42.900 09:34:30 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:42.900 09:34:30 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57949 00:06:42.900 killing process with pid 57949 00:06:42.900 09:34:30 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:42.900 09:34:30 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:42.900 09:34:30 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57949' 00:06:42.900 09:34:30 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 57949 00:06:42.900 09:34:30 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 57949 00:06:43.159 ************************************ 00:06:43.159 END TEST dpdk_mem_utility 00:06:43.159 ************************************ 00:06:43.159 00:06:43.159 real 0m1.814s 00:06:43.159 user 0m1.990s 00:06:43.159 sys 0m0.439s 00:06:43.159 09:34:30 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:43.159 09:34:30 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:43.418 09:34:30 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:43.418 09:34:30 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:43.418 09:34:30 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:43.418 09:34:30 -- common/autotest_common.sh@10 -- # set +x 
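The heap, mempool and memzone summary above is produced in two steps: the test issues the env_dpdk_get_mem_stats RPC, which makes the target write its DPDK allocation state to /tmp/spdk_mem_dump.txt, and scripts/dpdk_mem_info.py then parses that dump (the -m 0 invocation adds the per-element breakdown for heap 0). A rough Python sketch of the RPC half, using a hand-rolled JSON-RPC 2.0 call over the UNIX socket instead of the scripts/rpc.py helper the harness uses:

    import json
    import socket

    def spdk_rpc(method, params=None, sock_path="/var/tmp/spdk.sock"):
        # Minimal JSON-RPC 2.0 client for the SPDK target's UNIX domain socket.
        req = {"jsonrpc": "2.0", "method": method, "id": 1}
        if params:
            req["params"] = params
        with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
            s.connect(sock_path)
            s.sendall(json.dumps(req).encode())
            buf = b""
            while True:
                chunk = s.recv(4096)
                if not chunk:
                    return None
                buf += chunk
                try:
                    return json.loads(buf)  # stop once a full JSON reply arrived
                except ValueError:
                    continue                # partial read; keep receiving

    # spdk_rpc("env_dpdk_get_mem_stats") should return a result like
    # {"filename": "/tmp/spdk_mem_dump.txt"}, matching the output logged above.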
00:06:43.418 ************************************ 00:06:43.418 START TEST event 00:06:43.418 ************************************ 00:06:43.418 09:34:30 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:43.418 * Looking for test storage... 00:06:43.418 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:43.418 09:34:30 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:43.418 09:34:30 event -- common/autotest_common.sh@1693 -- # lcov --version 00:06:43.418 09:34:30 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:43.418 09:34:31 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:43.418 09:34:31 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:43.418 09:34:31 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:43.418 09:34:31 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:43.418 09:34:31 event -- scripts/common.sh@336 -- # IFS=.-: 00:06:43.418 09:34:31 event -- scripts/common.sh@336 -- # read -ra ver1 00:06:43.418 09:34:31 event -- scripts/common.sh@337 -- # IFS=.-: 00:06:43.418 09:34:31 event -- scripts/common.sh@337 -- # read -ra ver2 00:06:43.418 09:34:31 event -- scripts/common.sh@338 -- # local 'op=<' 00:06:43.418 09:34:31 event -- scripts/common.sh@340 -- # ver1_l=2 00:06:43.418 09:34:31 event -- scripts/common.sh@341 -- # ver2_l=1 00:06:43.418 09:34:31 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:43.418 09:34:31 event -- scripts/common.sh@344 -- # case "$op" in 00:06:43.418 09:34:31 event -- scripts/common.sh@345 -- # : 1 00:06:43.418 09:34:31 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:43.418 09:34:31 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:43.418 09:34:31 event -- scripts/common.sh@365 -- # decimal 1 00:06:43.418 09:34:31 event -- scripts/common.sh@353 -- # local d=1 00:06:43.418 09:34:31 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:43.418 09:34:31 event -- scripts/common.sh@355 -- # echo 1 00:06:43.418 09:34:31 event -- scripts/common.sh@365 -- # ver1[v]=1 00:06:43.418 09:34:31 event -- scripts/common.sh@366 -- # decimal 2 00:06:43.418 09:34:31 event -- scripts/common.sh@353 -- # local d=2 00:06:43.418 09:34:31 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:43.418 09:34:31 event -- scripts/common.sh@355 -- # echo 2 00:06:43.418 09:34:31 event -- scripts/common.sh@366 -- # ver2[v]=2 00:06:43.418 09:34:31 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:43.418 09:34:31 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:43.418 09:34:31 event -- scripts/common.sh@368 -- # return 0 00:06:43.418 09:34:31 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:43.418 09:34:31 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:43.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.418 --rc genhtml_branch_coverage=1 00:06:43.418 --rc genhtml_function_coverage=1 00:06:43.418 --rc genhtml_legend=1 00:06:43.418 --rc geninfo_all_blocks=1 00:06:43.418 --rc geninfo_unexecuted_blocks=1 00:06:43.418 00:06:43.418 ' 00:06:43.418 09:34:31 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:43.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.418 --rc genhtml_branch_coverage=1 00:06:43.418 --rc genhtml_function_coverage=1 00:06:43.418 --rc genhtml_legend=1 00:06:43.418 --rc 
geninfo_all_blocks=1 00:06:43.418 --rc geninfo_unexecuted_blocks=1 00:06:43.418 00:06:43.418 ' 00:06:43.418 09:34:31 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:43.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.418 --rc genhtml_branch_coverage=1 00:06:43.418 --rc genhtml_function_coverage=1 00:06:43.418 --rc genhtml_legend=1 00:06:43.418 --rc geninfo_all_blocks=1 00:06:43.418 --rc geninfo_unexecuted_blocks=1 00:06:43.418 00:06:43.418 ' 00:06:43.418 09:34:31 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:43.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.418 --rc genhtml_branch_coverage=1 00:06:43.418 --rc genhtml_function_coverage=1 00:06:43.418 --rc genhtml_legend=1 00:06:43.418 --rc geninfo_all_blocks=1 00:06:43.418 --rc geninfo_unexecuted_blocks=1 00:06:43.418 00:06:43.418 ' 00:06:43.418 09:34:31 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:43.418 09:34:31 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:43.418 09:34:31 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:43.419 09:34:31 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:06:43.419 09:34:31 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:43.419 09:34:31 event -- common/autotest_common.sh@10 -- # set +x 00:06:43.736 ************************************ 00:06:43.736 START TEST event_perf 00:06:43.736 ************************************ 00:06:43.736 09:34:31 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:43.736 Running I/O for 1 seconds...[2024-11-19 09:34:31.064771] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:06:43.737 [2024-11-19 09:34:31.064878] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58035 ] 00:06:43.737 [2024-11-19 09:34:31.210352] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:43.737 [2024-11-19 09:34:31.284296] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:43.737 [2024-11-19 09:34:31.284454] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:43.737 [2024-11-19 09:34:31.284456] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.737 Running I/O for 1 seconds...[2024-11-19 09:34:31.284378] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:45.116 00:06:45.116 lcore 0: 200707 00:06:45.116 lcore 1: 200708 00:06:45.116 lcore 2: 200707 00:06:45.116 lcore 3: 200708 00:06:45.116 done. 
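The per-lcore counters printed by event_perf above are the events each reactor dispatched during the one-second run on the 0xF core mask, so the counts double as an events-per-second rate. A quick check of the numbers shown, with a throwaway helper that exists only for illustration:

    # Aggregate the per-lcore event counts reported above (1-second run, cores 0-3).
    lcore_events = [200707, 200708, 200707, 200708]
    total = sum(lcore_events)
    print(f"{total} events/s across {len(lcore_events)} reactors "
          f"(~{total / len(lcore_events):.0f} per reactor)")  # 802830 total, ~200708 each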
00:06:45.116 ************************************ 00:06:45.116 END TEST event_perf 00:06:45.116 ************************************ 00:06:45.116 00:06:45.116 real 0m1.296s 00:06:45.116 user 0m4.124s 00:06:45.116 sys 0m0.050s 00:06:45.116 09:34:32 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:45.116 09:34:32 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:45.116 09:34:32 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:45.116 09:34:32 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:45.116 09:34:32 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:45.116 09:34:32 event -- common/autotest_common.sh@10 -- # set +x 00:06:45.116 ************************************ 00:06:45.116 START TEST event_reactor 00:06:45.116 ************************************ 00:06:45.116 09:34:32 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:45.116 [2024-11-19 09:34:32.406697] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:06:45.116 [2024-11-19 09:34:32.406809] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58068 ] 00:06:45.116 [2024-11-19 09:34:32.549961] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.116 [2024-11-19 09:34:32.615138] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.052 test_start 00:06:46.052 oneshot 00:06:46.052 tick 100 00:06:46.052 tick 100 00:06:46.052 tick 250 00:06:46.052 tick 100 00:06:46.052 tick 100 00:06:46.052 tick 250 00:06:46.052 tick 100 00:06:46.052 tick 500 00:06:46.052 tick 100 00:06:46.052 tick 100 00:06:46.052 tick 250 00:06:46.052 tick 100 00:06:46.052 tick 100 00:06:46.052 test_end 00:06:46.052 00:06:46.052 real 0m1.276s 00:06:46.052 user 0m1.131s 00:06:46.052 sys 0m0.038s 00:06:46.052 09:34:33 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:46.052 ************************************ 00:06:46.052 END TEST event_reactor 00:06:46.052 ************************************ 00:06:46.052 09:34:33 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:46.311 09:34:33 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:46.311 09:34:33 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:46.311 09:34:33 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:46.311 09:34:33 event -- common/autotest_common.sh@10 -- # set +x 00:06:46.311 ************************************ 00:06:46.311 START TEST event_reactor_perf 00:06:46.311 ************************************ 00:06:46.311 09:34:33 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:46.311 [2024-11-19 09:34:33.728153] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 
00:06:46.311 [2024-11-19 09:34:33.728280] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58098 ] 00:06:46.311 [2024-11-19 09:34:33.872766] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.570 [2024-11-19 09:34:33.936410] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.506 test_start 00:06:47.506 test_end 00:06:47.506 Performance: 375238 events per second 00:06:47.506 00:06:47.506 real 0m1.274s 00:06:47.506 user 0m1.122s 00:06:47.506 sys 0m0.043s 00:06:47.506 09:34:34 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:47.506 09:34:34 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:47.506 ************************************ 00:06:47.506 END TEST event_reactor_perf 00:06:47.506 ************************************ 00:06:47.506 09:34:35 event -- event/event.sh@49 -- # uname -s 00:06:47.506 09:34:35 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:47.506 09:34:35 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:47.506 09:34:35 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:47.506 09:34:35 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:47.506 09:34:35 event -- common/autotest_common.sh@10 -- # set +x 00:06:47.506 ************************************ 00:06:47.506 START TEST event_scheduler 00:06:47.506 ************************************ 00:06:47.506 09:34:35 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:47.506 * Looking for test storage... 
00:06:47.506 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:06:47.506 09:34:35 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:47.506 09:34:35 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:47.506 09:34:35 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:06:47.765 09:34:35 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:47.765 09:34:35 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:47.765 09:34:35 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:47.765 09:34:35 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:47.765 09:34:35 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:06:47.765 09:34:35 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:06:47.765 09:34:35 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:06:47.765 09:34:35 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:06:47.765 09:34:35 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:06:47.765 09:34:35 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:06:47.765 09:34:35 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:06:47.765 09:34:35 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:47.765 09:34:35 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:06:47.765 09:34:35 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:06:47.765 09:34:35 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:47.765 09:34:35 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:47.765 09:34:35 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:06:47.765 09:34:35 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:06:47.765 09:34:35 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:47.765 09:34:35 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:06:47.765 09:34:35 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:06:47.765 09:34:35 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:06:47.765 09:34:35 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:06:47.765 09:34:35 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:47.765 09:34:35 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:06:47.765 09:34:35 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:06:47.765 09:34:35 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:47.765 09:34:35 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:47.765 09:34:35 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:06:47.765 09:34:35 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:47.765 09:34:35 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:47.765 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.765 --rc genhtml_branch_coverage=1 00:06:47.765 --rc genhtml_function_coverage=1 00:06:47.765 --rc genhtml_legend=1 00:06:47.765 --rc geninfo_all_blocks=1 00:06:47.765 --rc geninfo_unexecuted_blocks=1 00:06:47.765 00:06:47.765 ' 00:06:47.765 09:34:35 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:47.765 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.765 --rc genhtml_branch_coverage=1 00:06:47.765 --rc genhtml_function_coverage=1 00:06:47.765 --rc genhtml_legend=1 00:06:47.765 --rc geninfo_all_blocks=1 00:06:47.765 --rc geninfo_unexecuted_blocks=1 00:06:47.765 00:06:47.765 ' 00:06:47.765 09:34:35 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:47.765 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.765 --rc genhtml_branch_coverage=1 00:06:47.765 --rc genhtml_function_coverage=1 00:06:47.765 --rc genhtml_legend=1 00:06:47.765 --rc geninfo_all_blocks=1 00:06:47.765 --rc geninfo_unexecuted_blocks=1 00:06:47.765 00:06:47.765 ' 00:06:47.765 09:34:35 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:47.765 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.765 --rc genhtml_branch_coverage=1 00:06:47.765 --rc genhtml_function_coverage=1 00:06:47.765 --rc genhtml_legend=1 00:06:47.765 --rc geninfo_all_blocks=1 00:06:47.765 --rc geninfo_unexecuted_blocks=1 00:06:47.765 00:06:47.765 ' 00:06:47.765 09:34:35 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:47.765 09:34:35 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58173 00:06:47.765 09:34:35 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:47.765 09:34:35 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:47.765 09:34:35 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58173 00:06:47.765 09:34:35 
event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 58173 ']' 00:06:47.765 09:34:35 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:47.765 09:34:35 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:47.765 09:34:35 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:47.765 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:47.765 09:34:35 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:47.765 09:34:35 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:47.765 [2024-11-19 09:34:35.282932] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:06:47.765 [2024-11-19 09:34:35.283086] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58173 ] 00:06:48.024 [2024-11-19 09:34:35.431686] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:48.024 [2024-11-19 09:34:35.499889] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.024 [2024-11-19 09:34:35.499956] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:48.024 [2024-11-19 09:34:35.500040] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:48.024 [2024-11-19 09:34:35.500045] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:48.964 09:34:36 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:48.964 09:34:36 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:06:48.964 09:34:36 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:48.964 09:34:36 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:48.964 09:34:36 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:48.964 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:48.964 POWER: Cannot set governor of lcore 0 to userspace 00:06:48.964 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:48.964 POWER: Cannot set governor of lcore 0 to performance 00:06:48.964 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:48.964 POWER: Cannot set governor of lcore 0 to userspace 00:06:48.964 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:48.964 POWER: Cannot set governor of lcore 0 to userspace 00:06:48.964 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:06:48.964 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:06:48.964 POWER: Unable to set Power Management Environment for lcore 0 00:06:48.964 [2024-11-19 09:34:36.359865] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:06:48.964 [2024-11-19 09:34:36.359983] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:06:48.964 [2024-11-19 09:34:36.360024] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:06:48.964 [2024-11-19 09:34:36.360112] 
scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:48.964 [2024-11-19 09:34:36.360153] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:48.964 [2024-11-19 09:34:36.360258] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:48.964 09:34:36 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:48.964 09:34:36 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:48.964 09:34:36 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:48.964 09:34:36 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:48.964 [2024-11-19 09:34:36.419710] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:48.964 [2024-11-19 09:34:36.456959] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:06:48.964 09:34:36 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:48.964 09:34:36 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:48.964 09:34:36 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:48.964 09:34:36 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:48.964 09:34:36 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:48.964 ************************************ 00:06:48.964 START TEST scheduler_create_thread 00:06:48.964 ************************************ 00:06:48.964 09:34:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:06:48.964 09:34:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:48.964 09:34:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:48.964 09:34:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:48.964 2 00:06:48.964 09:34:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:48.964 09:34:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:48.964 09:34:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:48.964 09:34:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:48.964 3 00:06:48.964 09:34:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:48.964 09:34:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:48.964 09:34:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:48.964 09:34:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:48.964 4 00:06:48.964 09:34:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:48.964 09:34:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:48.964 09:34:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:48.965 09:34:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:48.965 5 00:06:48.965 09:34:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:48.965 09:34:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:48.965 09:34:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:48.965 09:34:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:48.965 6 00:06:48.965 09:34:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:48.965 09:34:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:48.965 09:34:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:48.965 09:34:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:48.965 7 00:06:48.965 09:34:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:48.965 09:34:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:48.965 09:34:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:48.965 09:34:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:48.965 8 00:06:48.965 09:34:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:48.965 09:34:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:48.965 09:34:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:48.965 09:34:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:48.965 9 00:06:48.965 09:34:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:48.965 09:34:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:48.965 09:34:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:48.965 09:34:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:48.965 10 00:06:48.965 09:34:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:48.965 09:34:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:48.965 09:34:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:48.965 09:34:36 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:48.965 09:34:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:48.965 09:34:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:48.965 09:34:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:48.965 09:34:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:48.965 09:34:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:48.965 09:34:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:48.965 09:34:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:48.965 09:34:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:48.965 09:34:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:48.965 09:34:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:48.965 09:34:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:48.965 09:34:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:48.965 09:34:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:48.965 09:34:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:50.343 09:34:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:50.343 ************************************ 00:06:50.343 END TEST scheduler_create_thread 00:06:50.343 ************************************ 00:06:50.343 00:06:50.343 real 0m1.172s 00:06:50.343 user 0m0.014s 00:06:50.343 sys 0m0.008s 00:06:50.343 09:34:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:50.343 09:34:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:50.343 09:34:37 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:50.343 09:34:37 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58173 00:06:50.343 09:34:37 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 58173 ']' 00:06:50.343 09:34:37 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 58173 00:06:50.343 09:34:37 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:06:50.343 09:34:37 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:50.343 09:34:37 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58173 00:06:50.343 killing process with pid 58173 00:06:50.343 09:34:37 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:50.343 09:34:37 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:50.343 09:34:37 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
58173' 00:06:50.343 09:34:37 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 58173 00:06:50.343 09:34:37 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 58173 00:06:50.602 [2024-11-19 09:34:38.119646] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:06:50.861 ************************************ 00:06:50.861 END TEST event_scheduler 00:06:50.861 ************************************ 00:06:50.861 00:06:50.861 real 0m3.271s 00:06:50.861 user 0m6.220s 00:06:50.861 sys 0m0.373s 00:06:50.861 09:34:38 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:50.861 09:34:38 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:50.861 09:34:38 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:50.861 09:34:38 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:50.861 09:34:38 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:50.861 09:34:38 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:50.861 09:34:38 event -- common/autotest_common.sh@10 -- # set +x 00:06:50.861 ************************************ 00:06:50.861 START TEST app_repeat 00:06:50.861 ************************************ 00:06:50.861 09:34:38 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:06:50.861 09:34:38 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:50.861 09:34:38 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:50.861 09:34:38 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:50.861 09:34:38 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:50.861 09:34:38 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:50.861 09:34:38 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:50.861 09:34:38 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:50.861 Process app_repeat pid: 58256 00:06:50.861 09:34:38 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58256 00:06:50.861 09:34:38 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:50.861 09:34:38 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:50.861 09:34:38 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58256' 00:06:50.861 09:34:38 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:50.861 spdk_app_start Round 0 00:06:50.861 09:34:38 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:50.861 09:34:38 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58256 /var/tmp/spdk-nbd.sock 00:06:50.861 09:34:38 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58256 ']' 00:06:50.861 09:34:38 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:50.861 09:34:38 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:50.861 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:50.861 09:34:38 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
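For reference, the event_scheduler run above reduces to a short RPC sequence against the test application's socket. The sketch below is not part of the trace; it assumes the scheduler test binary (/home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler) is already running with --wait-for-rpc on /var/tmp/spdk.sock and that the test-only scheduler_plugin from test/event/scheduler/ is importable by rpc.py (for example via PYTHONPATH).
RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
# Pick the dynamic scheduler while the app is still waiting for RPC, then finish init.
$RPC framework_set_scheduler dynamic
$RPC framework_start_init
# Create one busy thread pinned to core 0 and one idle thread pinned to core 1;
# the plugin echoes back the new thread id, as the trace captures into thread_id.
busy_id=$($RPC --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100)
idle_id=$($RPC --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0)
# Drop the busy thread to 50% active, then delete the idle one
# (the trace does the same with thread ids 11 and 12).
$RPC --plugin scheduler_plugin scheduler_thread_set_active "$busy_id" 50
$RPC --plugin scheduler_plugin scheduler_thread_delete "$idle_id"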
00:06:50.861 09:34:38 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:50.861 09:34:38 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:50.861 [2024-11-19 09:34:38.398629] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:06:50.861 [2024-11-19 09:34:38.398763] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58256 ] 00:06:51.119 [2024-11-19 09:34:38.549496] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:51.119 [2024-11-19 09:34:38.613952] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:51.119 [2024-11-19 09:34:38.613964] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.119 [2024-11-19 09:34:38.669919] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:51.119 09:34:38 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:51.119 09:34:38 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:51.119 09:34:38 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:51.691 Malloc0 00:06:51.691 09:34:39 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:51.953 Malloc1 00:06:51.953 09:34:39 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:51.953 09:34:39 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:51.953 09:34:39 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:51.953 09:34:39 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:51.953 09:34:39 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:51.953 09:34:39 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:51.953 09:34:39 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:51.953 09:34:39 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:51.953 09:34:39 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:51.953 09:34:39 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:51.953 09:34:39 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:51.953 09:34:39 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:51.953 09:34:39 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:51.953 09:34:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:51.953 09:34:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:51.953 09:34:39 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:52.211 /dev/nbd0 00:06:52.211 09:34:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:52.211 09:34:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:52.212 09:34:39 event.app_repeat -- common/autotest_common.sh@872 -- # local 
nbd_name=nbd0 00:06:52.212 09:34:39 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:52.212 09:34:39 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:52.212 09:34:39 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:52.212 09:34:39 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:52.212 09:34:39 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:52.212 09:34:39 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:52.212 09:34:39 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:52.212 09:34:39 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:52.212 1+0 records in 00:06:52.212 1+0 records out 00:06:52.212 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000298619 s, 13.7 MB/s 00:06:52.212 09:34:39 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:52.212 09:34:39 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:52.212 09:34:39 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:52.212 09:34:39 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:52.212 09:34:39 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:52.212 09:34:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:52.212 09:34:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:52.212 09:34:39 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:52.470 /dev/nbd1 00:06:52.470 09:34:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:52.470 09:34:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:52.470 09:34:40 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:52.470 09:34:40 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:52.470 09:34:40 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:52.470 09:34:40 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:52.471 09:34:40 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:52.471 09:34:40 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:52.471 09:34:40 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:52.471 09:34:40 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:52.471 09:34:40 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:52.471 1+0 records in 00:06:52.471 1+0 records out 00:06:52.471 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000341743 s, 12.0 MB/s 00:06:52.471 09:34:40 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:52.471 09:34:40 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:52.471 09:34:40 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:52.471 09:34:40 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:52.471 09:34:40 event.app_repeat -- 
common/autotest_common.sh@893 -- # return 0 00:06:52.471 09:34:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:52.471 09:34:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:52.471 09:34:40 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:52.471 09:34:40 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:52.471 09:34:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:53.038 09:34:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:53.038 { 00:06:53.038 "nbd_device": "/dev/nbd0", 00:06:53.038 "bdev_name": "Malloc0" 00:06:53.038 }, 00:06:53.038 { 00:06:53.038 "nbd_device": "/dev/nbd1", 00:06:53.038 "bdev_name": "Malloc1" 00:06:53.038 } 00:06:53.038 ]' 00:06:53.038 09:34:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:53.038 { 00:06:53.038 "nbd_device": "/dev/nbd0", 00:06:53.038 "bdev_name": "Malloc0" 00:06:53.038 }, 00:06:53.038 { 00:06:53.038 "nbd_device": "/dev/nbd1", 00:06:53.038 "bdev_name": "Malloc1" 00:06:53.038 } 00:06:53.038 ]' 00:06:53.038 09:34:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:53.038 09:34:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:53.038 /dev/nbd1' 00:06:53.038 09:34:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:53.038 /dev/nbd1' 00:06:53.038 09:34:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:53.038 09:34:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:53.038 09:34:40 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:53.038 09:34:40 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:53.039 09:34:40 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:53.039 09:34:40 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:53.039 09:34:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:53.039 09:34:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:53.039 09:34:40 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:53.039 09:34:40 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:53.039 09:34:40 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:53.039 09:34:40 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:53.039 256+0 records in 00:06:53.039 256+0 records out 00:06:53.039 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00763031 s, 137 MB/s 00:06:53.039 09:34:40 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:53.039 09:34:40 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:53.039 256+0 records in 00:06:53.039 256+0 records out 00:06:53.039 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.024742 s, 42.4 MB/s 00:06:53.039 09:34:40 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:53.039 09:34:40 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:53.039 256+0 records in 00:06:53.039 
256+0 records out 00:06:53.039 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0306077 s, 34.3 MB/s 00:06:53.039 09:34:40 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:53.039 09:34:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:53.039 09:34:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:53.039 09:34:40 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:53.039 09:34:40 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:53.039 09:34:40 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:53.039 09:34:40 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:53.039 09:34:40 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:53.039 09:34:40 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:53.039 09:34:40 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:53.039 09:34:40 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:53.039 09:34:40 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:53.039 09:34:40 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:53.039 09:34:40 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:53.039 09:34:40 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:53.039 09:34:40 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:53.039 09:34:40 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:53.039 09:34:40 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:53.039 09:34:40 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:53.611 09:34:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:53.611 09:34:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:53.611 09:34:40 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:53.611 09:34:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:53.611 09:34:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:53.611 09:34:40 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:53.611 09:34:40 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:53.611 09:34:40 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:53.611 09:34:40 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:53.611 09:34:40 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:53.611 09:34:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:53.611 09:34:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:53.611 09:34:41 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:53.611 09:34:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:53.611 09:34:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 
00:06:53.611 09:34:41 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:53.869 09:34:41 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:53.869 09:34:41 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:53.869 09:34:41 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:53.869 09:34:41 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:53.869 09:34:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:54.128 09:34:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:54.128 09:34:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:54.128 09:34:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:54.128 09:34:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:54.128 09:34:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:54.128 09:34:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:54.128 09:34:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:54.128 09:34:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:54.128 09:34:41 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:54.128 09:34:41 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:54.128 09:34:41 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:54.128 09:34:41 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:54.128 09:34:41 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:54.386 09:34:41 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:54.648 [2024-11-19 09:34:42.072571] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:54.649 [2024-11-19 09:34:42.137482] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:54.649 [2024-11-19 09:34:42.137492] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.649 [2024-11-19 09:34:42.191388] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:54.649 [2024-11-19 09:34:42.191474] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:54.649 [2024-11-19 09:34:42.191489] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:57.933 spdk_app_start Round 1 00:06:57.933 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:57.933 09:34:44 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:57.933 09:34:44 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:57.933 09:34:44 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58256 /var/tmp/spdk-nbd.sock 00:06:57.933 09:34:44 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58256 ']' 00:06:57.933 09:34:44 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:57.934 09:34:44 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:57.934 09:34:44 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
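The Round 0 body above (nbd_common.sh@76 through @85) is a plain write/verify round trip over the two NBD devices. A minimal sketch of that step, assuming /dev/nbd0 and /dev/nbd1 are already backed by Malloc0 and Malloc1 as set up earlier in the round:
tmp=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest    # same scratch file the trace uses
dd if=/dev/urandom of="$tmp" bs=4096 count=256              # 1 MiB of random data
for nbd in /dev/nbd0 /dev/nbd1; do
    dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct   # write it through the NBD device
    cmp -b -n 1M "$tmp" "$nbd"                              # read back and byte-compare
done
rm "$tmp"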
00:06:57.934 09:34:44 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:57.934 09:34:44 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:57.934 09:34:45 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:57.934 09:34:45 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:57.934 09:34:45 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:57.934 Malloc0 00:06:57.934 09:34:45 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:58.191 Malloc1 00:06:58.450 09:34:45 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:58.450 09:34:45 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:58.450 09:34:45 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:58.450 09:34:45 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:58.450 09:34:45 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:58.450 09:34:45 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:58.450 09:34:45 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:58.450 09:34:45 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:58.450 09:34:45 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:58.450 09:34:45 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:58.450 09:34:45 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:58.450 09:34:45 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:58.450 09:34:45 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:58.450 09:34:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:58.450 09:34:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:58.450 09:34:45 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:58.708 /dev/nbd0 00:06:58.708 09:34:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:58.708 09:34:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:58.708 09:34:46 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:58.708 09:34:46 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:58.708 09:34:46 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:58.708 09:34:46 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:58.708 09:34:46 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:58.708 09:34:46 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:58.708 09:34:46 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:58.708 09:34:46 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:58.708 09:34:46 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:58.708 1+0 records in 00:06:58.708 1+0 records out 
00:06:58.708 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000454794 s, 9.0 MB/s 00:06:58.708 09:34:46 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:58.708 09:34:46 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:58.708 09:34:46 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:58.708 09:34:46 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:58.708 09:34:46 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:58.708 09:34:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:58.708 09:34:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:58.708 09:34:46 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:58.967 /dev/nbd1 00:06:58.967 09:34:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:58.967 09:34:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:58.967 09:34:46 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:58.967 09:34:46 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:58.967 09:34:46 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:58.967 09:34:46 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:58.967 09:34:46 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:58.967 09:34:46 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:58.967 09:34:46 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:58.967 09:34:46 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:58.967 09:34:46 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:58.967 1+0 records in 00:06:58.967 1+0 records out 00:06:58.967 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000391777 s, 10.5 MB/s 00:06:58.967 09:34:46 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:58.967 09:34:46 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:58.967 09:34:46 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:58.967 09:34:46 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:58.967 09:34:46 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:58.967 09:34:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:58.967 09:34:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:58.967 09:34:46 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:58.967 09:34:46 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:58.967 09:34:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:59.237 09:34:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:59.237 { 00:06:59.237 "nbd_device": "/dev/nbd0", 00:06:59.237 "bdev_name": "Malloc0" 00:06:59.237 }, 00:06:59.237 { 00:06:59.237 "nbd_device": "/dev/nbd1", 00:06:59.237 "bdev_name": "Malloc1" 00:06:59.237 } 
00:06:59.237 ]' 00:06:59.237 09:34:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:59.237 09:34:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:59.237 { 00:06:59.237 "nbd_device": "/dev/nbd0", 00:06:59.237 "bdev_name": "Malloc0" 00:06:59.237 }, 00:06:59.237 { 00:06:59.237 "nbd_device": "/dev/nbd1", 00:06:59.237 "bdev_name": "Malloc1" 00:06:59.237 } 00:06:59.237 ]' 00:06:59.237 09:34:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:59.237 /dev/nbd1' 00:06:59.237 09:34:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:59.237 /dev/nbd1' 00:06:59.237 09:34:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:59.237 09:34:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:59.237 09:34:46 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:59.237 09:34:46 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:59.237 09:34:46 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:59.237 09:34:46 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:59.237 09:34:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:59.237 09:34:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:59.237 09:34:46 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:59.237 09:34:46 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:59.237 09:34:46 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:59.237 09:34:46 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:59.237 256+0 records in 00:06:59.237 256+0 records out 00:06:59.237 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00477304 s, 220 MB/s 00:06:59.237 09:34:46 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:59.237 09:34:46 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:59.237 256+0 records in 00:06:59.237 256+0 records out 00:06:59.237 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.028519 s, 36.8 MB/s 00:06:59.237 09:34:46 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:59.237 09:34:46 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:59.527 256+0 records in 00:06:59.527 256+0 records out 00:06:59.527 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0221315 s, 47.4 MB/s 00:06:59.527 09:34:46 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:59.527 09:34:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:59.527 09:34:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:59.527 09:34:46 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:59.527 09:34:46 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:59.527 09:34:46 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:59.527 09:34:46 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:59.527 09:34:46 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:06:59.527 09:34:46 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:59.527 09:34:46 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:59.527 09:34:46 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:59.527 09:34:46 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:59.527 09:34:46 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:59.527 09:34:46 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:59.527 09:34:46 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:59.527 09:34:46 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:59.527 09:34:46 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:59.527 09:34:46 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:59.527 09:34:46 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:59.791 09:34:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:59.791 09:34:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:59.791 09:34:47 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:59.791 09:34:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:59.791 09:34:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:59.791 09:34:47 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:59.791 09:34:47 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:59.791 09:34:47 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:59.791 09:34:47 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:59.791 09:34:47 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:00.049 09:34:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:00.049 09:34:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:00.049 09:34:47 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:00.049 09:34:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:00.049 09:34:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:00.049 09:34:47 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:00.049 09:34:47 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:00.049 09:34:47 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:00.049 09:34:47 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:00.049 09:34:47 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:00.049 09:34:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:00.307 09:34:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:00.307 09:34:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:00.307 09:34:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:07:00.565 09:34:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:00.565 09:34:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:00.565 09:34:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:00.565 09:34:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:00.565 09:34:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:00.565 09:34:47 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:00.565 09:34:47 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:00.565 09:34:47 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:00.565 09:34:47 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:00.565 09:34:47 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:00.824 09:34:48 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:01.083 [2024-11-19 09:34:48.486638] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:01.083 [2024-11-19 09:34:48.550664] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:01.083 [2024-11-19 09:34:48.550673] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.083 [2024-11-19 09:34:48.606359] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:01.083 [2024-11-19 09:34:48.606665] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:01.083 [2024-11-19 09:34:48.606688] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:04.369 09:34:51 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:04.369 spdk_app_start Round 2 00:07:04.369 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:04.369 09:34:51 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:07:04.369 09:34:51 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58256 /var/tmp/spdk-nbd.sock 00:07:04.369 09:34:51 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58256 ']' 00:07:04.369 09:34:51 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:04.369 09:34:51 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:04.369 09:34:51 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
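Each round's setup and teardown seen above is the same handful of rpc.py calls against /var/tmp/spdk-nbd.sock; a minimal sketch, assuming the app_repeat binary is already listening on that socket:
RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
$RPC bdev_malloc_create 64 4096         # 64 MiB, 4096-byte blocks; auto-named Malloc0
$RPC bdev_malloc_create 64 4096         # second bdev, auto-named Malloc1
$RPC nbd_start_disk Malloc0 /dev/nbd0   # export both over NBD
$RPC nbd_start_disk Malloc1 /dev/nbd1
$RPC nbd_get_disks                      # should report the two nbd devices
# ... data round trip against /dev/nbd0 and /dev/nbd1 ...
$RPC nbd_stop_disk /dev/nbd0
$RPC nbd_stop_disk /dev/nbd1
$RPC spdk_kill_instance SIGTERM         # ends the round; the test then sleeps 3s and restarts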
00:07:04.369 09:34:51 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:04.369 09:34:51 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:04.369 09:34:51 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:04.369 09:34:51 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:07:04.369 09:34:51 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:04.369 Malloc0 00:07:04.628 09:34:51 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:04.628 Malloc1 00:07:04.887 09:34:52 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:04.887 09:34:52 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:04.887 09:34:52 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:04.887 09:34:52 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:04.887 09:34:52 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:04.887 09:34:52 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:04.887 09:34:52 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:04.887 09:34:52 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:04.887 09:34:52 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:04.887 09:34:52 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:04.887 09:34:52 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:04.887 09:34:52 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:04.887 09:34:52 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:04.887 09:34:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:04.887 09:34:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:04.887 09:34:52 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:05.146 /dev/nbd0 00:07:05.146 09:34:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:05.146 09:34:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:05.146 09:34:52 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:05.146 09:34:52 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:05.146 09:34:52 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:05.146 09:34:52 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:05.146 09:34:52 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:05.146 09:34:52 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:05.146 09:34:52 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:05.146 09:34:52 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:05.146 09:34:52 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:05.146 1+0 records in 00:07:05.146 1+0 records out 
00:07:05.146 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000383644 s, 10.7 MB/s 00:07:05.146 09:34:52 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:05.146 09:34:52 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:05.146 09:34:52 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:05.146 09:34:52 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:05.146 09:34:52 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:05.146 09:34:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:05.146 09:34:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:05.146 09:34:52 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:05.405 /dev/nbd1 00:07:05.405 09:34:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:05.405 09:34:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:05.405 09:34:52 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:05.405 09:34:52 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:05.405 09:34:52 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:05.405 09:34:52 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:05.405 09:34:52 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:05.405 09:34:52 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:05.405 09:34:52 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:05.405 09:34:52 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:05.405 09:34:52 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:05.405 1+0 records in 00:07:05.405 1+0 records out 00:07:05.405 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000367694 s, 11.1 MB/s 00:07:05.405 09:34:52 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:05.405 09:34:52 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:05.405 09:34:52 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:05.405 09:34:52 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:05.405 09:34:52 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:05.405 09:34:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:05.405 09:34:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:05.405 09:34:52 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:05.405 09:34:52 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:05.405 09:34:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:05.664 09:34:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:05.664 { 00:07:05.664 "nbd_device": "/dev/nbd0", 00:07:05.664 "bdev_name": "Malloc0" 00:07:05.664 }, 00:07:05.664 { 00:07:05.664 "nbd_device": "/dev/nbd1", 00:07:05.664 "bdev_name": "Malloc1" 00:07:05.664 } 
00:07:05.664 ]' 00:07:05.664 09:34:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:05.664 { 00:07:05.664 "nbd_device": "/dev/nbd0", 00:07:05.664 "bdev_name": "Malloc0" 00:07:05.664 }, 00:07:05.664 { 00:07:05.664 "nbd_device": "/dev/nbd1", 00:07:05.664 "bdev_name": "Malloc1" 00:07:05.664 } 00:07:05.664 ]' 00:07:05.664 09:34:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:05.664 09:34:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:05.664 /dev/nbd1' 00:07:05.664 09:34:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:05.664 /dev/nbd1' 00:07:05.664 09:34:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:05.664 09:34:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:05.664 09:34:53 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:05.664 09:34:53 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:05.664 09:34:53 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:05.664 09:34:53 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:05.664 09:34:53 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:05.664 09:34:53 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:05.664 09:34:53 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:05.664 09:34:53 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:05.664 09:34:53 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:05.664 09:34:53 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:05.664 256+0 records in 00:07:05.664 256+0 records out 00:07:05.664 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00556852 s, 188 MB/s 00:07:05.664 09:34:53 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:05.664 09:34:53 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:05.923 256+0 records in 00:07:05.923 256+0 records out 00:07:05.923 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.022158 s, 47.3 MB/s 00:07:05.923 09:34:53 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:05.923 09:34:53 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:05.923 256+0 records in 00:07:05.923 256+0 records out 00:07:05.923 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0225644 s, 46.5 MB/s 00:07:05.923 09:34:53 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:05.923 09:34:53 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:05.923 09:34:53 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:05.923 09:34:53 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:05.923 09:34:53 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:05.923 09:34:53 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:05.923 09:34:53 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:05.923 09:34:53 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:07:05.923 09:34:53 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:07:05.923 09:34:53 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:05.923 09:34:53 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:07:05.923 09:34:53 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:05.923 09:34:53 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:05.923 09:34:53 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:05.923 09:34:53 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:05.923 09:34:53 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:05.923 09:34:53 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:05.923 09:34:53 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:05.923 09:34:53 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:06.181 09:34:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:06.181 09:34:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:06.181 09:34:53 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:06.181 09:34:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:06.181 09:34:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:06.181 09:34:53 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:06.181 09:34:53 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:06.181 09:34:53 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:06.181 09:34:53 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:06.181 09:34:53 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:06.440 09:34:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:06.440 09:34:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:06.440 09:34:53 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:06.440 09:34:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:06.440 09:34:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:06.440 09:34:53 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:06.440 09:34:53 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:06.440 09:34:53 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:06.440 09:34:53 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:06.440 09:34:53 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:06.440 09:34:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:06.698 09:34:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:06.698 09:34:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:06.698 09:34:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # 
echo '[]' 00:07:06.698 09:34:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:06.698 09:34:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:06.698 09:34:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:06.698 09:34:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:06.698 09:34:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:06.698 09:34:54 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:06.956 09:34:54 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:06.956 09:34:54 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:06.956 09:34:54 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:06.956 09:34:54 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:07.215 09:34:54 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:07.215 [2024-11-19 09:34:54.803106] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:07.474 [2024-11-19 09:34:54.866434] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:07.474 [2024-11-19 09:34:54.866444] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.474 [2024-11-19 09:34:54.920369] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:07.474 [2024-11-19 09:34:54.920460] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:07.474 [2024-11-19 09:34:54.920475] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:10.759 09:34:57 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58256 /var/tmp/spdk-nbd.sock 00:07:10.759 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:10.759 09:34:57 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58256 ']' 00:07:10.759 09:34:57 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:10.759 09:34:57 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:10.759 09:34:57 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
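The write/verify pass traced above follows a simple pattern: fill a scratch file with random data, copy it onto each exported /dev/nbd device with O_DIRECT, compare the first 1 MiB back, then detach the devices over RPC and confirm the disk count drops to zero. A minimal sketch of that flow, using the paths and socket from this log (not the literal nbd_common.sh source):
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-nbd.sock
  tmp=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
  nbd_list=(/dev/nbd0 /dev/nbd1)
  dd if=/dev/urandom of="$tmp" bs=4096 count=256                 # 1 MiB of random data
  for dev in "${nbd_list[@]}"; do
      dd if="$tmp" of="$dev" bs=4096 count=256 oflag=direct       # write it through the NBD export
  done
  for dev in "${nbd_list[@]}"; do
      cmp -b -n 1M "$tmp" "$dev"                                  # read back and compare
  done
  rm "$tmp"
  for dev in "${nbd_list[@]}"; do
      "$rpc" -s "$sock" nbd_stop_disk "$dev"                      # detach, then poll /proc/partitions until the device is gone
  done
  "$rpc" -s "$sock" nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true   # expect 0 afterwards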
00:07:10.759 09:34:57 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:10.759 09:34:57 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:10.759 09:34:57 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:10.759 09:34:57 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:07:10.759 09:34:57 event.app_repeat -- event/event.sh@39 -- # killprocess 58256 00:07:10.759 09:34:57 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 58256 ']' 00:07:10.759 09:34:57 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 58256 00:07:10.759 09:34:57 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:07:10.759 09:34:57 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:10.759 09:34:57 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58256 00:07:10.759 killing process with pid 58256 00:07:10.759 09:34:58 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:10.759 09:34:58 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:10.759 09:34:58 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58256' 00:07:10.760 09:34:58 event.app_repeat -- common/autotest_common.sh@973 -- # kill 58256 00:07:10.760 09:34:58 event.app_repeat -- common/autotest_common.sh@978 -- # wait 58256 00:07:10.760 spdk_app_start is called in Round 0. 00:07:10.760 Shutdown signal received, stop current app iteration 00:07:10.760 Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 reinitialization... 00:07:10.760 spdk_app_start is called in Round 1. 00:07:10.760 Shutdown signal received, stop current app iteration 00:07:10.760 Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 reinitialization... 00:07:10.760 spdk_app_start is called in Round 2. 00:07:10.760 Shutdown signal received, stop current app iteration 00:07:10.760 Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 reinitialization... 00:07:10.760 spdk_app_start is called in Round 3. 00:07:10.760 Shutdown signal received, stop current app iteration 00:07:10.760 09:34:58 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:07:10.760 09:34:58 event.app_repeat -- event/event.sh@42 -- # return 0 00:07:10.760 00:07:10.760 real 0m19.834s 00:07:10.760 user 0m45.550s 00:07:10.760 sys 0m2.988s 00:07:10.760 09:34:58 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:10.760 ************************************ 00:07:10.760 END TEST app_repeat 00:07:10.760 ************************************ 00:07:10.760 09:34:58 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:10.760 09:34:58 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:07:10.760 09:34:58 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:07:10.760 09:34:58 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:10.760 09:34:58 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:10.760 09:34:58 event -- common/autotest_common.sh@10 -- # set +x 00:07:10.760 ************************************ 00:07:10.760 START TEST cpu_locks 00:07:10.760 ************************************ 00:07:10.760 09:34:58 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:07:10.760 * Looking for test storage... 
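The killprocess calls traced above reduce to a guarded kill-and-reap. Roughly (simplified sketch; the real autotest_common.sh helper also inspects the process name with ps --no-headers -o comm= and special-cases sudo-wrapped processes):
  killprocess() {
      local pid=$1
      [ -n "$pid" ] || return 1              # refuse an empty pid
      kill -0 "$pid" || return 0             # nothing to do if it is already gone
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid" || true                    # reap the child so the test run stays clean
  }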
00:07:10.760 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:07:10.760 09:34:58 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:10.760 09:34:58 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:07:10.760 09:34:58 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:11.019 09:34:58 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:11.019 09:34:58 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:11.019 09:34:58 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:11.019 09:34:58 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:11.019 09:34:58 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:07:11.019 09:34:58 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:07:11.019 09:34:58 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:07:11.019 09:34:58 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:07:11.019 09:34:58 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:07:11.019 09:34:58 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:07:11.019 09:34:58 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:07:11.019 09:34:58 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:11.019 09:34:58 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:07:11.019 09:34:58 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:07:11.019 09:34:58 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:11.019 09:34:58 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:11.019 09:34:58 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:07:11.019 09:34:58 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:07:11.019 09:34:58 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:11.019 09:34:58 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:07:11.019 09:34:58 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:07:11.019 09:34:58 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:07:11.019 09:34:58 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:07:11.019 09:34:58 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:11.019 09:34:58 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:07:11.019 09:34:58 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:07:11.019 09:34:58 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:11.019 09:34:58 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:11.019 09:34:58 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:07:11.019 09:34:58 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:11.019 09:34:58 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:11.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:11.019 --rc genhtml_branch_coverage=1 00:07:11.019 --rc genhtml_function_coverage=1 00:07:11.019 --rc genhtml_legend=1 00:07:11.019 --rc geninfo_all_blocks=1 00:07:11.019 --rc geninfo_unexecuted_blocks=1 00:07:11.019 00:07:11.019 ' 00:07:11.019 09:34:58 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:11.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:11.019 --rc genhtml_branch_coverage=1 00:07:11.019 --rc genhtml_function_coverage=1 
00:07:11.019 --rc genhtml_legend=1 00:07:11.019 --rc geninfo_all_blocks=1 00:07:11.019 --rc geninfo_unexecuted_blocks=1 00:07:11.019 00:07:11.019 ' 00:07:11.019 09:34:58 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:11.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:11.019 --rc genhtml_branch_coverage=1 00:07:11.019 --rc genhtml_function_coverage=1 00:07:11.019 --rc genhtml_legend=1 00:07:11.019 --rc geninfo_all_blocks=1 00:07:11.019 --rc geninfo_unexecuted_blocks=1 00:07:11.019 00:07:11.019 ' 00:07:11.020 09:34:58 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:11.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:11.020 --rc genhtml_branch_coverage=1 00:07:11.020 --rc genhtml_function_coverage=1 00:07:11.020 --rc genhtml_legend=1 00:07:11.020 --rc geninfo_all_blocks=1 00:07:11.020 --rc geninfo_unexecuted_blocks=1 00:07:11.020 00:07:11.020 ' 00:07:11.020 09:34:58 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:07:11.020 09:34:58 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:07:11.020 09:34:58 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:07:11.020 09:34:58 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:07:11.020 09:34:58 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:11.020 09:34:58 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:11.020 09:34:58 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:11.020 ************************************ 00:07:11.020 START TEST default_locks 00:07:11.020 ************************************ 00:07:11.020 09:34:58 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:07:11.020 09:34:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58706 00:07:11.020 09:34:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:11.020 09:34:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58706 00:07:11.020 09:34:58 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58706 ']' 00:07:11.020 09:34:58 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:11.020 09:34:58 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:11.020 09:34:58 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:11.020 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:11.020 09:34:58 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:11.020 09:34:58 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:11.020 [2024-11-19 09:34:58.531114] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 
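The lcov version probe a few records back (lt 1.15 2 via cmp_versions) is a component-wise compare over the dot-split version strings; when it reports "less than", the legacy LCOV_OPTS above get exported. A stripped-down sketch of the idea (the real scripts/common.sh helper also handles the '-' and ':' separators and the other comparison operators):
  lt() {
      local IFS=.-:
      local -a a b
      read -ra a <<< "$1"
      read -ra b <<< "$2"
      local n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} )) v
      for ((v = 0; v < n; v++)); do
          (( ${a[v]:-0} < ${b[v]:-0} )) && return 0   # first differing component decides
          (( ${a[v]:-0} > ${b[v]:-0} )) && return 1
      done
      return 1                                        # equal is not "less than"
  }
  lt 1.15 2 && echo "old lcov, exporting legacy LCOV_OPTS"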
00:07:11.020 [2024-11-19 09:34:58.531285] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58706 ] 00:07:11.279 [2024-11-19 09:34:58.683043] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.279 [2024-11-19 09:34:58.750408] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.279 [2024-11-19 09:34:58.829554] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:12.214 09:34:59 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:12.214 09:34:59 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:07:12.214 09:34:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58706 00:07:12.214 09:34:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58706 00:07:12.214 09:34:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:12.473 09:34:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58706 00:07:12.473 09:34:59 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 58706 ']' 00:07:12.473 09:34:59 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 58706 00:07:12.473 09:34:59 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:07:12.473 09:34:59 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:12.473 09:34:59 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58706 00:07:12.473 killing process with pid 58706 00:07:12.473 09:34:59 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:12.473 09:34:59 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:12.473 09:34:59 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58706' 00:07:12.473 09:34:59 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 58706 00:07:12.473 09:34:59 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 58706 00:07:12.733 09:35:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58706 00:07:12.733 09:35:00 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:07:12.733 09:35:00 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 58706 00:07:12.733 09:35:00 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:07:12.733 09:35:00 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:12.733 09:35:00 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:07:12.733 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
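The locks_exist check traced above simply asks lslocks whether the target pid holds one of the /var/tmp/spdk_cpu_lock_* files; a minimal sketch (58706 is the spdk_tgt pid from this run):
  locks_exist() {
      local pid=$1
      lslocks -p "$pid" | grep -q spdk_cpu_lock   # succeeds only if the pid holds a core lock file
  }
  locks_exist 58706 && echo "core lock is held"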
00:07:12.733 ERROR: process (pid: 58706) is no longer running 00:07:12.733 09:35:00 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:12.733 09:35:00 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 58706 00:07:12.733 09:35:00 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58706 ']' 00:07:12.733 09:35:00 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:12.733 09:35:00 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:12.733 09:35:00 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:12.733 09:35:00 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:12.733 09:35:00 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:12.733 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (58706) - No such process 00:07:12.733 09:35:00 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:12.733 09:35:00 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:07:12.733 09:35:00 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:07:12.733 09:35:00 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:12.733 09:35:00 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:12.733 09:35:00 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:12.733 09:35:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:07:12.733 09:35:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:12.733 09:35:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:07:12.733 09:35:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:12.733 00:07:12.733 real 0m1.870s 00:07:12.733 user 0m2.056s 00:07:12.733 sys 0m0.551s 00:07:12.733 ************************************ 00:07:12.733 END TEST default_locks 00:07:12.733 ************************************ 00:07:12.733 09:35:00 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:12.733 09:35:00 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:12.733 09:35:00 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:07:12.733 09:35:00 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:12.733 09:35:00 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:12.733 09:35:00 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:12.993 ************************************ 00:07:12.993 START TEST default_locks_via_rpc 00:07:12.993 ************************************ 00:07:12.993 09:35:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:07:12.993 09:35:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=58752 00:07:12.993 09:35:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:12.993 09:35:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # 
waitforlisten 58752 00:07:12.993 09:35:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 58752 ']' 00:07:12.993 09:35:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:12.993 09:35:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:12.993 09:35:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:12.993 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:12.993 09:35:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:12.993 09:35:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:12.993 [2024-11-19 09:35:00.439164] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:07:12.993 [2024-11-19 09:35:00.439310] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58752 ] 00:07:12.993 [2024-11-19 09:35:00.588518] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.252 [2024-11-19 09:35:00.653449] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.252 [2024-11-19 09:35:00.729651] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:13.512 09:35:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:13.512 09:35:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:13.512 09:35:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:07:13.512 09:35:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:13.512 09:35:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:13.512 09:35:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:13.512 09:35:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:07:13.512 09:35:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:13.512 09:35:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:07:13.512 09:35:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:13.512 09:35:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:07:13.512 09:35:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:13.512 09:35:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:13.512 09:35:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:13.512 09:35:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 58752 00:07:13.512 09:35:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 58752 00:07:13.512 09:35:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:13.770 
09:35:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 58752 00:07:13.771 09:35:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 58752 ']' 00:07:13.771 09:35:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 58752 00:07:13.771 09:35:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:07:14.029 09:35:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:14.029 09:35:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58752 00:07:14.029 killing process with pid 58752 00:07:14.029 09:35:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:14.029 09:35:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:14.029 09:35:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58752' 00:07:14.029 09:35:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 58752 00:07:14.029 09:35:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 58752 00:07:14.288 ************************************ 00:07:14.288 END TEST default_locks_via_rpc 00:07:14.288 ************************************ 00:07:14.288 00:07:14.288 real 0m1.471s 00:07:14.288 user 0m1.440s 00:07:14.288 sys 0m0.542s 00:07:14.288 09:35:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:14.288 09:35:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:14.288 09:35:01 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:07:14.288 09:35:01 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:14.288 09:35:01 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:14.288 09:35:01 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:14.288 ************************************ 00:07:14.288 START TEST non_locking_app_on_locked_coremask 00:07:14.288 ************************************ 00:07:14.288 09:35:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:07:14.288 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
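The default_locks_via_rpc flow above drops and re-takes the core locks at runtime through the framework RPCs (rpc_cmd in the trace is roughly equivalent to calling scripts/rpc.py against the app's socket). With a running target, the round trip looks approximately like this sketch; pid 58752 is the instance from this run:
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  pid=58752
  "$rpc" framework_disable_cpumask_locks                 # release /var/tmp/spdk_cpu_lock_* while the app keeps running
  lslocks -p "$pid" | grep -c spdk_cpu_lock || true      # expect 0 here
  "$rpc" framework_enable_cpumask_locks                  # re-acquire the per-core lock file
  lslocks -p "$pid" | grep -q spdk_cpu_lock && echo "lock re-acquired"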
00:07:14.288 09:35:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=58796 00:07:14.288 09:35:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 58796 /var/tmp/spdk.sock 00:07:14.288 09:35:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:14.288 09:35:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58796 ']' 00:07:14.288 09:35:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:14.288 09:35:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:14.288 09:35:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:14.288 09:35:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:14.288 09:35:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:14.547 [2024-11-19 09:35:01.947364] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:07:14.547 [2024-11-19 09:35:01.947458] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58796 ] 00:07:14.547 [2024-11-19 09:35:02.096513] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.807 [2024-11-19 09:35:02.170990] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.807 [2024-11-19 09:35:02.256712] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:15.375 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:15.375 09:35:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:15.375 09:35:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:15.376 09:35:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=58812 00:07:15.376 09:35:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:07:15.376 09:35:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 58812 /var/tmp/spdk2.sock 00:07:15.376 09:35:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58812 ']' 00:07:15.376 09:35:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:15.376 09:35:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:15.376 09:35:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
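non_locking_app_on_locked_coremask, whose setup is traced above, runs two targets on the same core mask; the second can only come up because it opts out of the core lock and uses its own RPC socket. The launch pattern is roughly (paths and flags taken from the log; each launch is followed by waitforlisten before the test continues):
  spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
  "$spdk_tgt" -m 0x1 &                                                  # first instance claims core 0
  pid1=$!
  "$spdk_tgt" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &   # second instance skips the claim
  pid2=$!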
00:07:15.376 09:35:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:15.376 09:35:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:15.634 [2024-11-19 09:35:03.030759] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:07:15.634 [2024-11-19 09:35:03.032322] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58812 ] 00:07:15.634 [2024-11-19 09:35:03.197941] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:07:15.634 [2024-11-19 09:35:03.198007] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.894 [2024-11-19 09:35:03.327378] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.894 [2024-11-19 09:35:03.487153] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:16.463 09:35:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:16.463 09:35:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:16.463 09:35:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 58796 00:07:16.463 09:35:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:16.463 09:35:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58796 00:07:17.401 09:35:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 58796 00:07:17.401 09:35:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58796 ']' 00:07:17.401 09:35:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58796 00:07:17.401 09:35:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:17.401 09:35:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:17.401 09:35:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58796 00:07:17.401 killing process with pid 58796 00:07:17.401 09:35:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:17.401 09:35:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:17.401 09:35:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58796' 00:07:17.401 09:35:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58796 00:07:17.401 09:35:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58796 00:07:18.339 09:35:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 58812 00:07:18.339 09:35:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58812 ']' 00:07:18.339 09:35:05 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@958 -- # kill -0 58812 00:07:18.339 09:35:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:18.339 09:35:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:18.339 09:35:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58812 00:07:18.339 killing process with pid 58812 00:07:18.339 09:35:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:18.339 09:35:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:18.339 09:35:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58812' 00:07:18.339 09:35:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58812 00:07:18.339 09:35:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58812 00:07:18.603 00:07:18.603 real 0m4.282s 00:07:18.603 user 0m4.793s 00:07:18.603 sys 0m1.173s 00:07:18.603 09:35:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:18.603 ************************************ 00:07:18.603 END TEST non_locking_app_on_locked_coremask 00:07:18.603 ************************************ 00:07:18.603 09:35:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:18.603 09:35:06 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:07:18.603 09:35:06 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:18.603 09:35:06 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:18.603 09:35:06 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:18.603 ************************************ 00:07:18.603 START TEST locking_app_on_unlocked_coremask 00:07:18.603 ************************************ 00:07:18.603 09:35:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:07:18.603 09:35:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=58879 00:07:18.603 09:35:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:07:18.603 09:35:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 58879 /var/tmp/spdk.sock 00:07:18.603 09:35:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58879 ']' 00:07:18.603 09:35:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:18.603 09:35:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:18.603 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:18.603 09:35:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:18.603 09:35:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:18.603 09:35:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:18.862 [2024-11-19 09:35:06.290838] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:07:18.862 [2024-11-19 09:35:06.291332] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58879 ] 00:07:18.862 [2024-11-19 09:35:06.442582] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:07:18.862 [2024-11-19 09:35:06.442945] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.121 [2024-11-19 09:35:06.507181] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.121 [2024-11-19 09:35:06.587081] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:19.380 09:35:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:19.380 09:35:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:19.380 09:35:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=58893 00:07:19.380 09:35:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 58893 /var/tmp/spdk2.sock 00:07:19.380 09:35:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:19.380 09:35:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58893 ']' 00:07:19.380 09:35:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:19.380 09:35:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:19.380 09:35:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:19.380 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:19.380 09:35:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:19.380 09:35:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:19.380 [2024-11-19 09:35:06.870388] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 
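locking_app_on_unlocked_coremask, set up in the records above, is the mirror image of the previous case: here the first target starts with --disable-cpumask-locks, so the core 0 lock file is free and the second, regular target takes it, which the later locks_exist check confirms. Sketch with the pids from this run:
  "$spdk_tgt" -m 0x1 --disable-cpumask-locks &   # pid 58879: holds no lock file
  "$spdk_tgt" -m 0x1 -r /var/tmp/spdk2.sock &    # pid 58893: acquires /var/tmp/spdk_cpu_lock_000
  lslocks -p 58893 | grep -q spdk_cpu_lock       # passes: the lock belongs to the second instance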
00:07:19.380 [2024-11-19 09:35:06.870505] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58893 ] 00:07:19.639 [2024-11-19 09:35:07.034933] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.639 [2024-11-19 09:35:07.166794] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.899 [2024-11-19 09:35:07.327455] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:20.468 09:35:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:20.468 09:35:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:20.468 09:35:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 58893 00:07:20.468 09:35:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58893 00:07:20.468 09:35:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:21.406 09:35:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 58879 00:07:21.406 09:35:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58879 ']' 00:07:21.406 09:35:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 58879 00:07:21.406 09:35:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:21.406 09:35:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:21.406 09:35:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58879 00:07:21.406 killing process with pid 58879 00:07:21.406 09:35:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:21.406 09:35:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:21.406 09:35:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58879' 00:07:21.406 09:35:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 58879 00:07:21.406 09:35:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 58879 00:07:22.345 09:35:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 58893 00:07:22.345 09:35:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58893 ']' 00:07:22.345 09:35:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 58893 00:07:22.345 09:35:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:22.345 09:35:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:22.345 09:35:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58893 00:07:22.345 killing process with pid 58893 00:07:22.345 09:35:09 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:22.345 09:35:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:22.345 09:35:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58893' 00:07:22.345 09:35:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 58893 00:07:22.345 09:35:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 58893 00:07:22.615 ************************************ 00:07:22.615 END TEST locking_app_on_unlocked_coremask 00:07:22.615 ************************************ 00:07:22.615 00:07:22.615 real 0m3.850s 00:07:22.615 user 0m4.270s 00:07:22.615 sys 0m1.121s 00:07:22.615 09:35:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:22.615 09:35:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:22.615 09:35:10 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:22.615 09:35:10 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:22.615 09:35:10 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:22.615 09:35:10 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:22.615 ************************************ 00:07:22.615 START TEST locking_app_on_locked_coremask 00:07:22.615 ************************************ 00:07:22.615 09:35:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:07:22.615 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:22.615 09:35:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=58960 00:07:22.615 09:35:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 58960 /var/tmp/spdk.sock 00:07:22.615 09:35:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:22.615 09:35:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58960 ']' 00:07:22.615 09:35:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:22.615 09:35:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:22.615 09:35:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:22.615 09:35:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:22.615 09:35:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:22.615 [2024-11-19 09:35:10.204956] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 
00:07:22.615 [2024-11-19 09:35:10.205454] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58960 ] 00:07:22.875 [2024-11-19 09:35:10.362998] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.875 [2024-11-19 09:35:10.429640] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.133 [2024-11-19 09:35:10.505105] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:23.700 09:35:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:23.700 09:35:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:23.700 09:35:11 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=58976 00:07:23.700 09:35:11 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 58976 /var/tmp/spdk2.sock 00:07:23.700 09:35:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:07:23.700 09:35:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 58976 /var/tmp/spdk2.sock 00:07:23.700 09:35:11 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:23.700 09:35:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:07:23.700 09:35:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:23.700 09:35:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:07:23.700 09:35:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:23.700 09:35:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 58976 /var/tmp/spdk2.sock 00:07:23.700 09:35:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58976 ']' 00:07:23.700 09:35:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:23.700 09:35:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:23.700 09:35:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:23.700 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:23.700 09:35:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:23.700 09:35:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:23.700 [2024-11-19 09:35:11.297193] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 
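locking_app_on_locked_coremask launches a second target on the already-claimed core and wraps the wait in NOT, which passes only when the wrapped command fails. Conceptually (simplified sketch; the real autotest_common.sh helper also validates the command via valid_exec_arg and treats signal deaths, es > 128, specially):
  NOT() {
      local es=0
      "$@" || es=$?        # run the command, capture its exit status
      (( es != 0 ))        # succeed only if the wrapped command failed
  }
  NOT waitforlisten "$pid2" /var/tmp/spdk2.sock   # the second instance is expected to fail to come up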
00:07:23.700 [2024-11-19 09:35:11.297769] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58976 ] 00:07:23.959 [2024-11-19 09:35:11.457170] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 58960 has claimed it. 00:07:23.959 [2024-11-19 09:35:11.461304] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:24.527 ERROR: process (pid: 58976) is no longer running 00:07:24.527 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (58976) - No such process 00:07:24.527 09:35:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:24.527 09:35:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:07:24.527 09:35:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:07:24.527 09:35:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:24.527 09:35:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:24.527 09:35:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:24.527 09:35:12 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 58960 00:07:24.527 09:35:12 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58960 00:07:24.527 09:35:12 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:24.785 09:35:12 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 58960 00:07:24.785 09:35:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58960 ']' 00:07:24.785 09:35:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58960 00:07:24.785 09:35:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:24.785 09:35:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:24.785 09:35:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58960 00:07:25.044 killing process with pid 58960 00:07:25.044 09:35:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:25.044 09:35:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:25.044 09:35:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58960' 00:07:25.044 09:35:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58960 00:07:25.044 09:35:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58960 00:07:25.303 00:07:25.303 real 0m2.680s 00:07:25.303 user 0m3.172s 00:07:25.303 sys 0m0.636s 00:07:25.303 09:35:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:25.303 ************************************ 00:07:25.303 END 
TEST locking_app_on_locked_coremask 00:07:25.303 09:35:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:25.303 ************************************ 00:07:25.303 09:35:12 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:25.303 09:35:12 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:25.303 09:35:12 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:25.303 09:35:12 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:25.303 ************************************ 00:07:25.303 START TEST locking_overlapped_coremask 00:07:25.303 ************************************ 00:07:25.303 09:35:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:07:25.303 09:35:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59027 00:07:25.303 09:35:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:07:25.303 09:35:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 59027 /var/tmp/spdk.sock 00:07:25.303 09:35:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59027 ']' 00:07:25.303 09:35:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:25.303 09:35:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:25.303 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:25.303 09:35:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:25.303 09:35:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:25.303 09:35:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:25.303 [2024-11-19 09:35:12.911560] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 
00:07:25.303 [2024-11-19 09:35:12.911805] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59027 ] 00:07:25.571 [2024-11-19 09:35:13.053589] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:25.571 [2024-11-19 09:35:13.115991] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:25.571 [2024-11-19 09:35:13.116117] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.571 [2024-11-19 09:35:13.116114] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:25.571 [2024-11-19 09:35:13.189217] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:25.842 09:35:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:25.842 09:35:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:25.842 09:35:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59032 00:07:25.842 09:35:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:25.842 09:35:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59032 /var/tmp/spdk2.sock 00:07:25.842 09:35:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:07:25.842 09:35:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59032 /var/tmp/spdk2.sock 00:07:25.842 09:35:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:07:25.842 09:35:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:25.842 09:35:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:07:25.842 09:35:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:25.842 09:35:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59032 /var/tmp/spdk2.sock 00:07:25.842 09:35:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59032 ']' 00:07:25.842 09:35:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:25.842 09:35:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:25.842 09:35:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:25.842 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:25.842 09:35:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:25.842 09:35:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:25.842 [2024-11-19 09:35:13.458591] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 
00:07:25.842 [2024-11-19 09:35:13.459081] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59032 ] 00:07:26.101 [2024-11-19 09:35:13.625593] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59027 has claimed it. 00:07:26.101 [2024-11-19 09:35:13.625642] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:26.670 ERROR: process (pid: 59032) is no longer running 00:07:26.670 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59032) - No such process 00:07:26.670 09:35:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:26.670 09:35:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:07:26.670 09:35:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:07:26.670 09:35:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:26.670 09:35:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:26.670 09:35:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:26.670 09:35:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:26.670 09:35:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:26.670 09:35:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:26.670 09:35:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:26.670 09:35:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 59027 00:07:26.670 09:35:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 59027 ']' 00:07:26.670 09:35:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 59027 00:07:26.670 09:35:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:07:26.670 09:35:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:26.670 09:35:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59027 00:07:26.670 09:35:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:26.670 09:35:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:26.670 09:35:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59027' 00:07:26.670 killing process with pid 59027 00:07:26.670 09:35:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 59027 00:07:26.670 09:35:14 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 59027 00:07:27.236 00:07:27.236 real 0m1.761s 00:07:27.236 user 0m4.872s 00:07:27.236 sys 0m0.411s 00:07:27.236 09:35:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:27.236 09:35:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:27.236 ************************************ 00:07:27.236 END TEST locking_overlapped_coremask 00:07:27.236 ************************************ 00:07:27.236 09:35:14 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:27.236 09:35:14 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:27.236 09:35:14 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:27.236 09:35:14 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:27.236 ************************************ 00:07:27.236 START TEST locking_overlapped_coremask_via_rpc 00:07:27.236 ************************************ 00:07:27.236 09:35:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:07:27.236 09:35:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59076 00:07:27.236 09:35:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:27.236 09:35:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59076 /var/tmp/spdk.sock 00:07:27.236 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:27.236 09:35:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59076 ']' 00:07:27.236 09:35:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:27.236 09:35:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:27.236 09:35:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:27.236 09:35:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:27.236 09:35:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:27.236 [2024-11-19 09:35:14.742843] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:07:27.236 [2024-11-19 09:35:14.742967] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59076 ] 00:07:27.495 [2024-11-19 09:35:14.898355] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:27.495 [2024-11-19 09:35:14.898393] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:27.495 [2024-11-19 09:35:14.954686] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:27.495 [2024-11-19 09:35:14.954811] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:27.495 [2024-11-19 09:35:14.954815] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.495 [2024-11-19 09:35:15.023452] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:27.754 09:35:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:27.754 09:35:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:27.754 09:35:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59088 00:07:27.754 09:35:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59088 /var/tmp/spdk2.sock 00:07:27.754 09:35:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:27.754 09:35:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59088 ']' 00:07:27.754 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:27.754 09:35:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:27.754 09:35:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:27.754 09:35:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:27.754 09:35:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:27.754 09:35:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:27.754 [2024-11-19 09:35:15.298386] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:07:27.754 [2024-11-19 09:35:15.298503] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59088 ] 00:07:28.013 [2024-11-19 09:35:15.461114] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:28.013 [2024-11-19 09:35:15.461181] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:28.013 [2024-11-19 09:35:15.594119] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:28.013 [2024-11-19 09:35:15.594258] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:28.013 [2024-11-19 09:35:15.594258] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:28.271 [2024-11-19 09:35:15.736692] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:28.839 09:35:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:28.839 09:35:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:28.839 09:35:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:28.839 09:35:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.839 09:35:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:28.839 09:35:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.839 09:35:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:28.839 09:35:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:07:28.839 09:35:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:28.839 09:35:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:07:28.839 09:35:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:28.839 09:35:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:07:28.839 09:35:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:28.839 09:35:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:28.839 09:35:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.839 09:35:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:28.839 [2024-11-19 09:35:16.369365] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59076 has claimed it. 00:07:28.839 request: 00:07:28.839 { 00:07:28.839 "method": "framework_enable_cpumask_locks", 00:07:28.839 "req_id": 1 00:07:28.839 } 00:07:28.839 Got JSON-RPC error response 00:07:28.839 response: 00:07:28.839 { 00:07:28.839 "code": -32603, 00:07:28.839 "message": "Failed to claim CPU core: 2" 00:07:28.839 } 00:07:28.839 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:28.839 09:35:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:28.839 09:35:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:07:28.839 09:35:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:28.839 09:35:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:28.839 09:35:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:28.840 09:35:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59076 /var/tmp/spdk.sock 00:07:28.840 09:35:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59076 ']' 00:07:28.840 09:35:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:28.840 09:35:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:28.840 09:35:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:28.840 09:35:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:28.840 09:35:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:29.099 09:35:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:29.099 09:35:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:29.099 09:35:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59088 /var/tmp/spdk2.sock 00:07:29.099 09:35:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59088 ']' 00:07:29.099 09:35:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:29.099 09:35:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:29.099 09:35:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:29.099 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:07:29.099 09:35:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:29.099 09:35:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:29.357 09:35:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:29.357 09:35:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:29.357 09:35:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:29.357 09:35:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:29.357 09:35:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:29.357 09:35:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:29.357 ************************************ 00:07:29.357 END TEST locking_overlapped_coremask_via_rpc 00:07:29.357 ************************************ 00:07:29.357 00:07:29.357 real 0m2.300s 00:07:29.357 user 0m1.313s 00:07:29.357 sys 0m0.181s 00:07:29.357 09:35:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:29.357 09:35:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:29.617 09:35:17 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:29.617 09:35:17 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59076 ]] 00:07:29.617 09:35:17 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59076 00:07:29.617 09:35:17 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59076 ']' 00:07:29.617 09:35:17 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59076 00:07:29.617 09:35:17 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:07:29.617 09:35:17 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:29.617 09:35:17 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59076 00:07:29.617 killing process with pid 59076 00:07:29.617 09:35:17 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:29.617 09:35:17 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:29.617 09:35:17 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59076' 00:07:29.617 09:35:17 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59076 00:07:29.617 09:35:17 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59076 00:07:29.877 09:35:17 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59088 ]] 00:07:29.877 09:35:17 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59088 00:07:29.877 09:35:17 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59088 ']' 00:07:29.877 09:35:17 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59088 00:07:29.877 09:35:17 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:07:29.877 09:35:17 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:29.877 
09:35:17 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59088 00:07:29.877 killing process with pid 59088 00:07:29.877 09:35:17 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:07:29.877 09:35:17 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:07:29.877 09:35:17 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59088' 00:07:29.877 09:35:17 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59088 00:07:29.877 09:35:17 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59088 00:07:30.446 09:35:17 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:30.446 09:35:17 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:30.446 09:35:17 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59076 ]] 00:07:30.446 09:35:17 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59076 00:07:30.446 09:35:17 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59076 ']' 00:07:30.446 09:35:17 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59076 00:07:30.446 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59076) - No such process 00:07:30.446 Process with pid 59076 is not found 00:07:30.446 09:35:17 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59076 is not found' 00:07:30.446 09:35:17 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59088 ]] 00:07:30.446 09:35:17 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59088 00:07:30.446 09:35:17 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59088 ']' 00:07:30.446 09:35:17 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59088 00:07:30.446 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59088) - No such process 00:07:30.446 Process with pid 59088 is not found 00:07:30.446 09:35:17 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59088 is not found' 00:07:30.446 09:35:17 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:30.446 ************************************ 00:07:30.446 END TEST cpu_locks 00:07:30.446 ************************************ 00:07:30.446 00:07:30.446 real 0m19.587s 00:07:30.446 user 0m33.820s 00:07:30.446 sys 0m5.529s 00:07:30.446 09:35:17 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:30.446 09:35:17 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:30.446 ************************************ 00:07:30.446 END TEST event 00:07:30.446 ************************************ 00:07:30.446 00:07:30.446 real 0m47.052s 00:07:30.446 user 1m32.199s 00:07:30.446 sys 0m9.283s 00:07:30.446 09:35:17 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:30.446 09:35:17 event -- common/autotest_common.sh@10 -- # set +x 00:07:30.446 09:35:17 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:30.446 09:35:17 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:30.446 09:35:17 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:30.446 09:35:17 -- common/autotest_common.sh@10 -- # set +x 00:07:30.446 ************************************ 00:07:30.446 START TEST thread 00:07:30.446 ************************************ 00:07:30.446 09:35:17 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:30.446 * Looking for test storage... 
00:07:30.446 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:07:30.446 09:35:17 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:30.446 09:35:17 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:07:30.446 09:35:17 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:30.706 09:35:18 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:30.706 09:35:18 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:30.706 09:35:18 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:30.706 09:35:18 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:30.706 09:35:18 thread -- scripts/common.sh@336 -- # IFS=.-: 00:07:30.706 09:35:18 thread -- scripts/common.sh@336 -- # read -ra ver1 00:07:30.706 09:35:18 thread -- scripts/common.sh@337 -- # IFS=.-: 00:07:30.706 09:35:18 thread -- scripts/common.sh@337 -- # read -ra ver2 00:07:30.706 09:35:18 thread -- scripts/common.sh@338 -- # local 'op=<' 00:07:30.706 09:35:18 thread -- scripts/common.sh@340 -- # ver1_l=2 00:07:30.706 09:35:18 thread -- scripts/common.sh@341 -- # ver2_l=1 00:07:30.706 09:35:18 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:30.706 09:35:18 thread -- scripts/common.sh@344 -- # case "$op" in 00:07:30.706 09:35:18 thread -- scripts/common.sh@345 -- # : 1 00:07:30.706 09:35:18 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:30.706 09:35:18 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:30.706 09:35:18 thread -- scripts/common.sh@365 -- # decimal 1 00:07:30.706 09:35:18 thread -- scripts/common.sh@353 -- # local d=1 00:07:30.706 09:35:18 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:30.706 09:35:18 thread -- scripts/common.sh@355 -- # echo 1 00:07:30.706 09:35:18 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:07:30.706 09:35:18 thread -- scripts/common.sh@366 -- # decimal 2 00:07:30.706 09:35:18 thread -- scripts/common.sh@353 -- # local d=2 00:07:30.706 09:35:18 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:30.706 09:35:18 thread -- scripts/common.sh@355 -- # echo 2 00:07:30.706 09:35:18 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:07:30.706 09:35:18 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:30.706 09:35:18 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:30.706 09:35:18 thread -- scripts/common.sh@368 -- # return 0 00:07:30.706 09:35:18 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:30.706 09:35:18 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:30.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.706 --rc genhtml_branch_coverage=1 00:07:30.706 --rc genhtml_function_coverage=1 00:07:30.706 --rc genhtml_legend=1 00:07:30.706 --rc geninfo_all_blocks=1 00:07:30.706 --rc geninfo_unexecuted_blocks=1 00:07:30.706 00:07:30.706 ' 00:07:30.706 09:35:18 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:30.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.706 --rc genhtml_branch_coverage=1 00:07:30.706 --rc genhtml_function_coverage=1 00:07:30.706 --rc genhtml_legend=1 00:07:30.706 --rc geninfo_all_blocks=1 00:07:30.706 --rc geninfo_unexecuted_blocks=1 00:07:30.706 00:07:30.706 ' 00:07:30.706 09:35:18 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:30.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:07:30.706 --rc genhtml_branch_coverage=1 00:07:30.706 --rc genhtml_function_coverage=1 00:07:30.706 --rc genhtml_legend=1 00:07:30.706 --rc geninfo_all_blocks=1 00:07:30.706 --rc geninfo_unexecuted_blocks=1 00:07:30.706 00:07:30.706 ' 00:07:30.706 09:35:18 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:30.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.706 --rc genhtml_branch_coverage=1 00:07:30.706 --rc genhtml_function_coverage=1 00:07:30.706 --rc genhtml_legend=1 00:07:30.706 --rc geninfo_all_blocks=1 00:07:30.706 --rc geninfo_unexecuted_blocks=1 00:07:30.706 00:07:30.706 ' 00:07:30.706 09:35:18 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:30.706 09:35:18 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:07:30.706 09:35:18 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:30.706 09:35:18 thread -- common/autotest_common.sh@10 -- # set +x 00:07:30.706 ************************************ 00:07:30.706 START TEST thread_poller_perf 00:07:30.706 ************************************ 00:07:30.706 09:35:18 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:30.706 [2024-11-19 09:35:18.115635] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:07:30.706 [2024-11-19 09:35:18.115900] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59224 ] 00:07:30.706 [2024-11-19 09:35:18.258911] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.707 [2024-11-19 09:35:18.311589] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.707 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:07:32.081 [2024-11-19T09:35:19.704Z] ====================================== 00:07:32.081 [2024-11-19T09:35:19.704Z] busy:2205557374 (cyc) 00:07:32.081 [2024-11-19T09:35:19.704Z] total_run_count: 346000 00:07:32.081 [2024-11-19T09:35:19.704Z] tsc_hz: 2200000000 (cyc) 00:07:32.081 [2024-11-19T09:35:19.704Z] ====================================== 00:07:32.081 [2024-11-19T09:35:19.704Z] poller_cost: 6374 (cyc), 2897 (nsec) 00:07:32.081 00:07:32.081 ************************************ 00:07:32.081 END TEST thread_poller_perf 00:07:32.081 ************************************ 00:07:32.081 real 0m1.264s 00:07:32.081 user 0m1.110s 00:07:32.081 sys 0m0.047s 00:07:32.081 09:35:19 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:32.081 09:35:19 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:32.081 09:35:19 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:32.081 09:35:19 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:07:32.081 09:35:19 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:32.081 09:35:19 thread -- common/autotest_common.sh@10 -- # set +x 00:07:32.081 ************************************ 00:07:32.081 START TEST thread_poller_perf 00:07:32.081 ************************************ 00:07:32.081 09:35:19 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:32.081 [2024-11-19 09:35:19.432315] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:07:32.081 [2024-11-19 09:35:19.432426] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59254 ] 00:07:32.081 [2024-11-19 09:35:19.574041] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.081 [2024-11-19 09:35:19.625445] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.081 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:07:33.458 [2024-11-19T09:35:21.081Z] ====================================== 00:07:33.458 [2024-11-19T09:35:21.081Z] busy:2202042960 (cyc) 00:07:33.458 [2024-11-19T09:35:21.081Z] total_run_count: 4634000 00:07:33.458 [2024-11-19T09:35:21.081Z] tsc_hz: 2200000000 (cyc) 00:07:33.458 [2024-11-19T09:35:21.081Z] ====================================== 00:07:33.458 [2024-11-19T09:35:21.081Z] poller_cost: 475 (cyc), 215 (nsec) 00:07:33.458 00:07:33.458 real 0m1.256s 00:07:33.458 user 0m1.109s 00:07:33.458 sys 0m0.039s 00:07:33.458 ************************************ 00:07:33.458 END TEST thread_poller_perf 00:07:33.458 ************************************ 00:07:33.458 09:35:20 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:33.458 09:35:20 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:33.458 09:35:20 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:33.458 ************************************ 00:07:33.458 END TEST thread 00:07:33.458 ************************************ 00:07:33.458 00:07:33.458 real 0m2.798s 00:07:33.458 user 0m2.362s 00:07:33.458 sys 0m0.217s 00:07:33.458 09:35:20 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:33.458 09:35:20 thread -- common/autotest_common.sh@10 -- # set +x 00:07:33.458 09:35:20 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:07:33.458 09:35:20 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:33.458 09:35:20 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:33.458 09:35:20 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:33.458 09:35:20 -- common/autotest_common.sh@10 -- # set +x 00:07:33.458 ************************************ 00:07:33.458 START TEST app_cmdline 00:07:33.458 ************************************ 00:07:33.458 09:35:20 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:33.458 * Looking for test storage... 
00:07:33.458 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:33.458 09:35:20 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:33.458 09:35:20 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:07:33.458 09:35:20 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:33.458 09:35:20 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:33.458 09:35:20 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:33.458 09:35:20 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:33.458 09:35:20 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:33.458 09:35:20 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:07:33.458 09:35:20 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:07:33.458 09:35:20 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:07:33.458 09:35:20 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:07:33.458 09:35:20 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:07:33.458 09:35:20 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:07:33.458 09:35:20 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:07:33.458 09:35:20 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:33.458 09:35:20 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:07:33.458 09:35:20 app_cmdline -- scripts/common.sh@345 -- # : 1 00:07:33.458 09:35:20 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:33.458 09:35:20 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:33.458 09:35:20 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:07:33.458 09:35:20 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:07:33.458 09:35:20 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:33.458 09:35:20 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:07:33.458 09:35:20 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:07:33.458 09:35:20 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:07:33.458 09:35:20 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:07:33.458 09:35:20 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:33.458 09:35:20 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:07:33.458 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:33.458 09:35:20 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:07:33.458 09:35:20 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:33.458 09:35:20 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:33.458 09:35:20 app_cmdline -- scripts/common.sh@368 -- # return 0 00:07:33.458 09:35:20 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:33.458 09:35:20 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:33.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:33.458 --rc genhtml_branch_coverage=1 00:07:33.458 --rc genhtml_function_coverage=1 00:07:33.458 --rc genhtml_legend=1 00:07:33.458 --rc geninfo_all_blocks=1 00:07:33.458 --rc geninfo_unexecuted_blocks=1 00:07:33.458 00:07:33.458 ' 00:07:33.458 09:35:20 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:33.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:33.458 --rc genhtml_branch_coverage=1 00:07:33.458 --rc genhtml_function_coverage=1 00:07:33.458 --rc genhtml_legend=1 00:07:33.458 --rc geninfo_all_blocks=1 00:07:33.458 --rc geninfo_unexecuted_blocks=1 00:07:33.458 00:07:33.458 ' 00:07:33.458 09:35:20 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:33.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:33.458 --rc genhtml_branch_coverage=1 00:07:33.458 --rc genhtml_function_coverage=1 00:07:33.458 --rc genhtml_legend=1 00:07:33.458 --rc geninfo_all_blocks=1 00:07:33.458 --rc geninfo_unexecuted_blocks=1 00:07:33.458 00:07:33.458 ' 00:07:33.458 09:35:20 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:33.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:33.458 --rc genhtml_branch_coverage=1 00:07:33.458 --rc genhtml_function_coverage=1 00:07:33.459 --rc genhtml_legend=1 00:07:33.459 --rc geninfo_all_blocks=1 00:07:33.459 --rc geninfo_unexecuted_blocks=1 00:07:33.459 00:07:33.459 ' 00:07:33.459 09:35:20 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:33.459 09:35:20 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59342 00:07:33.459 09:35:20 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59342 00:07:33.459 09:35:20 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 59342 ']' 00:07:33.459 09:35:20 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:33.459 09:35:20 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:33.459 09:35:20 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:33.459 09:35:20 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:33.459 09:35:20 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:33.459 09:35:20 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:33.459 [2024-11-19 09:35:21.007607] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 
00:07:33.459 [2024-11-19 09:35:21.007858] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59342 ] 00:07:33.717 [2024-11-19 09:35:21.148706] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.717 [2024-11-19 09:35:21.196973] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.717 [2024-11-19 09:35:21.270877] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:33.976 09:35:21 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:33.976 09:35:21 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:07:33.976 09:35:21 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:07:34.235 { 00:07:34.235 "version": "SPDK v25.01-pre git sha1 53ca6a885", 00:07:34.235 "fields": { 00:07:34.235 "major": 25, 00:07:34.235 "minor": 1, 00:07:34.235 "patch": 0, 00:07:34.235 "suffix": "-pre", 00:07:34.235 "commit": "53ca6a885" 00:07:34.235 } 00:07:34.235 } 00:07:34.235 09:35:21 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:34.235 09:35:21 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:34.235 09:35:21 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:34.235 09:35:21 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:34.235 09:35:21 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:34.235 09:35:21 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.235 09:35:21 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:34.235 09:35:21 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:34.235 09:35:21 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:34.235 09:35:21 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.235 09:35:21 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:34.235 09:35:21 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:34.235 09:35:21 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:34.235 09:35:21 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:07:34.235 09:35:21 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:34.235 09:35:21 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:34.235 09:35:21 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:34.235 09:35:21 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:34.235 09:35:21 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:34.235 09:35:21 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:34.235 09:35:21 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:34.235 09:35:21 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:34.235 09:35:21 app_cmdline -- common/autotest_common.sh@646 -- # 
[[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:34.235 09:35:21 app_cmdline -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:34.802 request: 00:07:34.802 { 00:07:34.802 "method": "env_dpdk_get_mem_stats", 00:07:34.802 "req_id": 1 00:07:34.802 } 00:07:34.802 Got JSON-RPC error response 00:07:34.802 response: 00:07:34.802 { 00:07:34.802 "code": -32601, 00:07:34.802 "message": "Method not found" 00:07:34.802 } 00:07:34.802 09:35:22 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:07:34.802 09:35:22 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:34.802 09:35:22 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:34.802 09:35:22 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:34.802 09:35:22 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59342 00:07:34.802 09:35:22 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 59342 ']' 00:07:34.802 09:35:22 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 59342 00:07:34.802 09:35:22 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:07:34.802 09:35:22 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:34.802 09:35:22 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59342 00:07:34.802 killing process with pid 59342 00:07:34.802 09:35:22 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:34.802 09:35:22 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:34.802 09:35:22 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59342' 00:07:34.802 09:35:22 app_cmdline -- common/autotest_common.sh@973 -- # kill 59342 00:07:34.802 09:35:22 app_cmdline -- common/autotest_common.sh@978 -- # wait 59342 00:07:35.061 ************************************ 00:07:35.061 END TEST app_cmdline 00:07:35.061 ************************************ 00:07:35.061 00:07:35.061 real 0m1.793s 00:07:35.061 user 0m2.205s 00:07:35.061 sys 0m0.461s 00:07:35.061 09:35:22 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:35.061 09:35:22 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:35.061 09:35:22 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:35.061 09:35:22 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:35.061 09:35:22 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:35.061 09:35:22 -- common/autotest_common.sh@10 -- # set +x 00:07:35.061 ************************************ 00:07:35.061 START TEST version 00:07:35.061 ************************************ 00:07:35.061 09:35:22 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:35.061 * Looking for test storage... 
00:07:35.061 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:35.061 09:35:22 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:35.320 09:35:22 version -- common/autotest_common.sh@1693 -- # lcov --version 00:07:35.320 09:35:22 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:35.320 09:35:22 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:35.320 09:35:22 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:35.320 09:35:22 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:35.320 09:35:22 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:35.320 09:35:22 version -- scripts/common.sh@336 -- # IFS=.-: 00:07:35.320 09:35:22 version -- scripts/common.sh@336 -- # read -ra ver1 00:07:35.320 09:35:22 version -- scripts/common.sh@337 -- # IFS=.-: 00:07:35.320 09:35:22 version -- scripts/common.sh@337 -- # read -ra ver2 00:07:35.320 09:35:22 version -- scripts/common.sh@338 -- # local 'op=<' 00:07:35.320 09:35:22 version -- scripts/common.sh@340 -- # ver1_l=2 00:07:35.320 09:35:22 version -- scripts/common.sh@341 -- # ver2_l=1 00:07:35.320 09:35:22 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:35.320 09:35:22 version -- scripts/common.sh@344 -- # case "$op" in 00:07:35.320 09:35:22 version -- scripts/common.sh@345 -- # : 1 00:07:35.320 09:35:22 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:35.320 09:35:22 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:35.320 09:35:22 version -- scripts/common.sh@365 -- # decimal 1 00:07:35.320 09:35:22 version -- scripts/common.sh@353 -- # local d=1 00:07:35.320 09:35:22 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:35.320 09:35:22 version -- scripts/common.sh@355 -- # echo 1 00:07:35.320 09:35:22 version -- scripts/common.sh@365 -- # ver1[v]=1 00:07:35.320 09:35:22 version -- scripts/common.sh@366 -- # decimal 2 00:07:35.321 09:35:22 version -- scripts/common.sh@353 -- # local d=2 00:07:35.321 09:35:22 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:35.321 09:35:22 version -- scripts/common.sh@355 -- # echo 2 00:07:35.321 09:35:22 version -- scripts/common.sh@366 -- # ver2[v]=2 00:07:35.321 09:35:22 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:35.321 09:35:22 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:35.321 09:35:22 version -- scripts/common.sh@368 -- # return 0 00:07:35.321 09:35:22 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:35.321 09:35:22 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:35.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.321 --rc genhtml_branch_coverage=1 00:07:35.321 --rc genhtml_function_coverage=1 00:07:35.321 --rc genhtml_legend=1 00:07:35.321 --rc geninfo_all_blocks=1 00:07:35.321 --rc geninfo_unexecuted_blocks=1 00:07:35.321 00:07:35.321 ' 00:07:35.321 09:35:22 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:35.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.321 --rc genhtml_branch_coverage=1 00:07:35.321 --rc genhtml_function_coverage=1 00:07:35.321 --rc genhtml_legend=1 00:07:35.321 --rc geninfo_all_blocks=1 00:07:35.321 --rc geninfo_unexecuted_blocks=1 00:07:35.321 00:07:35.321 ' 00:07:35.321 09:35:22 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:35.321 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:07:35.321 --rc genhtml_branch_coverage=1 00:07:35.321 --rc genhtml_function_coverage=1 00:07:35.321 --rc genhtml_legend=1 00:07:35.321 --rc geninfo_all_blocks=1 00:07:35.321 --rc geninfo_unexecuted_blocks=1 00:07:35.321 00:07:35.321 ' 00:07:35.321 09:35:22 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:35.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.321 --rc genhtml_branch_coverage=1 00:07:35.321 --rc genhtml_function_coverage=1 00:07:35.321 --rc genhtml_legend=1 00:07:35.321 --rc geninfo_all_blocks=1 00:07:35.321 --rc geninfo_unexecuted_blocks=1 00:07:35.321 00:07:35.321 ' 00:07:35.321 09:35:22 version -- app/version.sh@17 -- # get_header_version major 00:07:35.321 09:35:22 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:35.321 09:35:22 version -- app/version.sh@14 -- # cut -f2 00:07:35.321 09:35:22 version -- app/version.sh@14 -- # tr -d '"' 00:07:35.321 09:35:22 version -- app/version.sh@17 -- # major=25 00:07:35.321 09:35:22 version -- app/version.sh@18 -- # get_header_version minor 00:07:35.321 09:35:22 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:35.321 09:35:22 version -- app/version.sh@14 -- # cut -f2 00:07:35.321 09:35:22 version -- app/version.sh@14 -- # tr -d '"' 00:07:35.321 09:35:22 version -- app/version.sh@18 -- # minor=1 00:07:35.321 09:35:22 version -- app/version.sh@19 -- # get_header_version patch 00:07:35.321 09:35:22 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:35.321 09:35:22 version -- app/version.sh@14 -- # cut -f2 00:07:35.321 09:35:22 version -- app/version.sh@14 -- # tr -d '"' 00:07:35.321 09:35:22 version -- app/version.sh@19 -- # patch=0 00:07:35.321 09:35:22 version -- app/version.sh@20 -- # get_header_version suffix 00:07:35.321 09:35:22 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:35.321 09:35:22 version -- app/version.sh@14 -- # cut -f2 00:07:35.321 09:35:22 version -- app/version.sh@14 -- # tr -d '"' 00:07:35.321 09:35:22 version -- app/version.sh@20 -- # suffix=-pre 00:07:35.321 09:35:22 version -- app/version.sh@22 -- # version=25.1 00:07:35.321 09:35:22 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:35.321 09:35:22 version -- app/version.sh@28 -- # version=25.1rc0 00:07:35.321 09:35:22 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:35.321 09:35:22 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:35.321 09:35:22 version -- app/version.sh@30 -- # py_version=25.1rc0 00:07:35.321 09:35:22 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:07:35.321 00:07:35.321 real 0m0.247s 00:07:35.321 user 0m0.175s 00:07:35.321 sys 0m0.111s 00:07:35.321 ************************************ 00:07:35.321 END TEST version 00:07:35.321 ************************************ 00:07:35.321 09:35:22 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:35.321 09:35:22 version -- common/autotest_common.sh@10 -- # set +x 00:07:35.321 09:35:22 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:07:35.321 09:35:22 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:07:35.321 09:35:22 -- spdk/autotest.sh@194 -- # uname -s 00:07:35.321 09:35:22 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:07:35.321 09:35:22 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:35.321 09:35:22 -- spdk/autotest.sh@195 -- # [[ 1 -eq 1 ]] 00:07:35.321 09:35:22 -- spdk/autotest.sh@201 -- # [[ 0 -eq 0 ]] 00:07:35.321 09:35:22 -- spdk/autotest.sh@202 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:07:35.321 09:35:22 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:35.321 09:35:22 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:35.321 09:35:22 -- common/autotest_common.sh@10 -- # set +x 00:07:35.321 ************************************ 00:07:35.321 START TEST spdk_dd 00:07:35.321 ************************************ 00:07:35.321 09:35:22 spdk_dd -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:07:35.580 * Looking for test storage... 00:07:35.580 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:35.580 09:35:22 spdk_dd -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:35.580 09:35:22 spdk_dd -- common/autotest_common.sh@1693 -- # lcov --version 00:07:35.580 09:35:22 spdk_dd -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:35.580 09:35:23 spdk_dd -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:35.580 09:35:23 spdk_dd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:35.580 09:35:23 spdk_dd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:35.580 09:35:23 spdk_dd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:35.580 09:35:23 spdk_dd -- scripts/common.sh@336 -- # IFS=.-: 00:07:35.580 09:35:23 spdk_dd -- scripts/common.sh@336 -- # read -ra ver1 00:07:35.580 09:35:23 spdk_dd -- scripts/common.sh@337 -- # IFS=.-: 00:07:35.580 09:35:23 spdk_dd -- scripts/common.sh@337 -- # read -ra ver2 00:07:35.580 09:35:23 spdk_dd -- scripts/common.sh@338 -- # local 'op=<' 00:07:35.580 09:35:23 spdk_dd -- scripts/common.sh@340 -- # ver1_l=2 00:07:35.580 09:35:23 spdk_dd -- scripts/common.sh@341 -- # ver2_l=1 00:07:35.580 09:35:23 spdk_dd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:35.580 09:35:23 spdk_dd -- scripts/common.sh@344 -- # case "$op" in 00:07:35.580 09:35:23 spdk_dd -- scripts/common.sh@345 -- # : 1 00:07:35.580 09:35:23 spdk_dd -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:35.580 09:35:23 spdk_dd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:35.580 09:35:23 spdk_dd -- scripts/common.sh@365 -- # decimal 1 00:07:35.580 09:35:23 spdk_dd -- scripts/common.sh@353 -- # local d=1 00:07:35.580 09:35:23 spdk_dd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:35.580 09:35:23 spdk_dd -- scripts/common.sh@355 -- # echo 1 00:07:35.580 09:35:23 spdk_dd -- scripts/common.sh@365 -- # ver1[v]=1 00:07:35.580 09:35:23 spdk_dd -- scripts/common.sh@366 -- # decimal 2 00:07:35.580 09:35:23 spdk_dd -- scripts/common.sh@353 -- # local d=2 00:07:35.581 09:35:23 spdk_dd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:35.581 09:35:23 spdk_dd -- scripts/common.sh@355 -- # echo 2 00:07:35.581 09:35:23 spdk_dd -- scripts/common.sh@366 -- # ver2[v]=2 00:07:35.581 09:35:23 spdk_dd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:35.581 09:35:23 spdk_dd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:35.581 09:35:23 spdk_dd -- scripts/common.sh@368 -- # return 0 00:07:35.581 09:35:23 spdk_dd -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:35.581 09:35:23 spdk_dd -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:35.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.581 --rc genhtml_branch_coverage=1 00:07:35.581 --rc genhtml_function_coverage=1 00:07:35.581 --rc genhtml_legend=1 00:07:35.581 --rc geninfo_all_blocks=1 00:07:35.581 --rc geninfo_unexecuted_blocks=1 00:07:35.581 00:07:35.581 ' 00:07:35.581 09:35:23 spdk_dd -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:35.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.581 --rc genhtml_branch_coverage=1 00:07:35.581 --rc genhtml_function_coverage=1 00:07:35.581 --rc genhtml_legend=1 00:07:35.581 --rc geninfo_all_blocks=1 00:07:35.581 --rc geninfo_unexecuted_blocks=1 00:07:35.581 00:07:35.581 ' 00:07:35.581 09:35:23 spdk_dd -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:35.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.581 --rc genhtml_branch_coverage=1 00:07:35.581 --rc genhtml_function_coverage=1 00:07:35.581 --rc genhtml_legend=1 00:07:35.581 --rc geninfo_all_blocks=1 00:07:35.581 --rc geninfo_unexecuted_blocks=1 00:07:35.581 00:07:35.581 ' 00:07:35.581 09:35:23 spdk_dd -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:35.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.581 --rc genhtml_branch_coverage=1 00:07:35.581 --rc genhtml_function_coverage=1 00:07:35.581 --rc genhtml_legend=1 00:07:35.581 --rc geninfo_all_blocks=1 00:07:35.581 --rc geninfo_unexecuted_blocks=1 00:07:35.581 00:07:35.581 ' 00:07:35.581 09:35:23 spdk_dd -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:35.581 09:35:23 spdk_dd -- scripts/common.sh@15 -- # shopt -s extglob 00:07:35.581 09:35:23 spdk_dd -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:35.581 09:35:23 spdk_dd -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:35.581 09:35:23 spdk_dd -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:35.581 09:35:23 spdk_dd -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.581 09:35:23 spdk_dd -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.581 09:35:23 spdk_dd -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.581 09:35:23 spdk_dd -- paths/export.sh@5 -- # export PATH 00:07:35.581 09:35:23 spdk_dd -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.581 09:35:23 spdk_dd -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:35.840 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:35.840 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:07:35.840 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:07:36.100 09:35:23 spdk_dd -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:07:36.100 09:35:23 spdk_dd -- dd/dd.sh@11 -- # nvme_in_userspace 00:07:36.100 09:35:23 spdk_dd -- scripts/common.sh@312 -- # local bdf bdfs 00:07:36.100 09:35:23 spdk_dd -- scripts/common.sh@313 -- # local nvmes 00:07:36.100 09:35:23 spdk_dd -- scripts/common.sh@315 -- # [[ -n '' ]] 00:07:36.100 09:35:23 spdk_dd -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:07:36.100 09:35:23 spdk_dd -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:07:36.100 09:35:23 spdk_dd -- scripts/common.sh@298 -- # local bdf= 00:07:36.100 09:35:23 spdk_dd -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:07:36.100 09:35:23 spdk_dd -- scripts/common.sh@233 -- # local class 00:07:36.100 09:35:23 spdk_dd -- scripts/common.sh@234 -- # local subclass 00:07:36.100 09:35:23 spdk_dd -- scripts/common.sh@235 -- # local progif 00:07:36.100 09:35:23 spdk_dd -- scripts/common.sh@236 -- # printf %02x 1 00:07:36.100 09:35:23 spdk_dd -- scripts/common.sh@236 -- # class=01 00:07:36.100 09:35:23 spdk_dd -- scripts/common.sh@237 -- # printf %02x 8 00:07:36.100 09:35:23 spdk_dd -- scripts/common.sh@237 -- # subclass=08 00:07:36.100 09:35:23 spdk_dd -- scripts/common.sh@238 -- # printf %02x 2 00:07:36.100 09:35:23 spdk_dd -- 
scripts/common.sh@238 -- # progif=02 00:07:36.100 09:35:23 spdk_dd -- scripts/common.sh@240 -- # hash lspci 00:07:36.100 09:35:23 spdk_dd -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:07:36.100 09:35:23 spdk_dd -- scripts/common.sh@242 -- # lspci -mm -n -D 00:07:36.100 09:35:23 spdk_dd -- scripts/common.sh@243 -- # grep -i -- -p02 00:07:36.100 09:35:23 spdk_dd -- scripts/common.sh@245 -- # tr -d '"' 00:07:36.100 09:35:23 spdk_dd -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:07:36.100 09:35:23 spdk_dd -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:07:36.100 09:35:23 spdk_dd -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:07:36.100 09:35:23 spdk_dd -- scripts/common.sh@18 -- # local i 00:07:36.100 09:35:23 spdk_dd -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:07:36.100 09:35:23 spdk_dd -- scripts/common.sh@25 -- # [[ -z '' ]] 00:07:36.100 09:35:23 spdk_dd -- scripts/common.sh@27 -- # return 0 00:07:36.100 09:35:23 spdk_dd -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:07:36.100 09:35:23 spdk_dd -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:07:36.100 09:35:23 spdk_dd -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:07:36.100 09:35:23 spdk_dd -- scripts/common.sh@18 -- # local i 00:07:36.100 09:35:23 spdk_dd -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:07:36.100 09:35:23 spdk_dd -- scripts/common.sh@25 -- # [[ -z '' ]] 00:07:36.100 09:35:23 spdk_dd -- scripts/common.sh@27 -- # return 0 00:07:36.100 09:35:23 spdk_dd -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:07:36.100 09:35:23 spdk_dd -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:07:36.100 09:35:23 spdk_dd -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:07:36.100 09:35:23 spdk_dd -- scripts/common.sh@323 -- # uname -s 00:07:36.100 09:35:23 spdk_dd -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:07:36.100 09:35:23 spdk_dd -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:07:36.100 09:35:23 spdk_dd -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:07:36.100 09:35:23 spdk_dd -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:07:36.100 09:35:23 spdk_dd -- scripts/common.sh@323 -- # uname -s 00:07:36.100 09:35:23 spdk_dd -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:07:36.100 09:35:23 spdk_dd -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:07:36.100 09:35:23 spdk_dd -- scripts/common.sh@328 -- # (( 2 )) 00:07:36.100 09:35:23 spdk_dd -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:07:36.100 09:35:23 spdk_dd -- dd/dd.sh@13 -- # check_liburing 00:07:36.100 09:35:23 spdk_dd -- dd/common.sh@139 -- # local lib 00:07:36.100 09:35:23 spdk_dd -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:07:36.100 09:35:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:36.100 09:35:23 spdk_dd -- dd/common.sh@137 -- # objdump -p /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:36.100 09:35:23 spdk_dd -- dd/common.sh@137 -- # grep NEEDED 00:07:36.100 09:35:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_malloc.so.6.0 == liburing.so.* ]] 00:07:36.100 09:35:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:36.100 09:35:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_null.so.6.0 == liburing.so.* ]] 00:07:36.100 09:35:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:36.100 09:35:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_nvme.so.7.1 == liburing.so.* ]] 
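[Editor's note] The version check traced above (app/version.sh) reduces to pulling SPDK_VERSION_MAJOR/MINOR/PATCH/SUFFIX out of include/spdk/version.h with grep/cut/tr and comparing the assembled string against the Python package. The following is a minimal re-creation of that pattern, assuming the tab-separated "#define" layout that the traced cut -f2 relies on; SPDK_DIR and the rc-suffix assembly are illustrative assumptions, not the repo's exact script:

  # Illustrative sketch of the get_header_version pattern seen in the trace.
  SPDK_DIR=/home/vagrant/spdk_repo/spdk
  get_header_version() {
    # $1 is MAJOR, MINOR, PATCH or SUFFIX; cut -f2 assumes a tab after the macro name,
    # tr -d '"' strips the quotes around the suffix value.
    grep -E "^#define SPDK_VERSION_$1[[:space:]]+" "$SPDK_DIR/include/spdk/version.h" \
      | cut -f2 | tr -d '"'
  }
  major=$(get_header_version MAJOR)    # 25 in this run
  minor=$(get_header_version MINOR)    # 1
  patch=$(get_header_version PATCH)    # 0
  suffix=$(get_header_version SUFFIX)  # -pre
  version=$major.$minor
  (( patch != 0 )) && version+=.$patch
  [[ $suffix == -pre ]] && version+=rc0   # yields 25.1rc0, as at app/version.sh@28 above
  # Cross-check against the installed Python package, as the test does:
  py_version=$(python3 -c 'import spdk; print(spdk.__version__)')
  [[ $py_version == "$version" ]] && echo "version header and python package agree"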
00:07:36.100 09:35:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:36.100 09:35:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_passthru.so.6.0 == liburing.so.* ]] 00:07:36.100 09:35:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:36.100 09:35:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_lvol.so.6.0 == liburing.so.* ]] 00:07:36.100 09:35:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:36.100 09:35:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_raid.so.6.0 == liburing.so.* ]] 00:07:36.100 09:35:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:36.100 09:35:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_error.so.6.0 == liburing.so.* ]] 00:07:36.100 09:35:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:36.100 09:35:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_gpt.so.6.0 == liburing.so.* ]] 00:07:36.100 09:35:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:36.101 09:35:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_split.so.6.0 == liburing.so.* ]] 00:07:36.101 09:35:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:36.101 09:35:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_delay.so.6.0 == liburing.so.* ]] 00:07:36.101 09:35:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:36.101 09:35:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_zone_block.so.6.0 == liburing.so.* ]] 00:07:36.101 09:35:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:36.101 09:35:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs_bdev.so.6.0 == liburing.so.* ]] 00:07:36.101 09:35:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:36.101 09:35:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs.so.10.0 == liburing.so.* ]] 00:07:36.101 09:35:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:36.101 09:35:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob_bdev.so.11.0 == liburing.so.* ]] 00:07:36.101 09:35:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:36.101 09:35:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_lvol.so.10.0 == liburing.so.* ]] 00:07:36.101 09:35:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:36.101 09:35:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob.so.11.0 == liburing.so.* ]] 00:07:36.101 09:35:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:36.101 09:35:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_nvme.so.15.0 == liburing.so.* ]] 00:07:36.101 09:35:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:36.101 09:35:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_provider.so.7.0 == liburing.so.* ]] 00:07:36.101 09:35:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:36.101 09:35:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_utils.so.1.0 == liburing.so.* ]] 00:07:36.101 09:35:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:36.101 09:35:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_aio.so.6.0 == liburing.so.* ]] 00:07:36.101 09:35:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:36.101 09:35:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_ftl.so.6.0 == liburing.so.* ]] 00:07:36.101 09:35:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:36.101 09:35:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ftl.so.9.0 == liburing.so.* ]] 00:07:36.101 09:35:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:36.101 09:35:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_virtio.so.6.0 == liburing.so.* ]] 00:07:36.101 09:35:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 
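[Editor's note] The nvme_in_userspace walk traced just above locates NVMe controllers purely by PCI class code: class 01 (mass storage), subclass 08 (non-volatile memory), prog-if 02 (NVM Express). A condensed sketch of that enumeration follows; the pci_can_use allow/block-list filtering and per-OS driver checks from scripts/common.sh are intentionally left out:

  # Minimal sketch of the class-code walk behind nvme_in_userspace.
  class=01 subclass=08 progif=02   # mass storage / non-volatile memory / NVM Express
  lspci -mm -n -D \
    | grep -i -- "-p$progif" \
    | tr -d '"' \
    | awk -v cc="$class$subclass" '$2 == cc {print $1}'
  # Prints one PCI address per NVMe controller, e.g. 0000:00:10.0 and 0000:00:11.0 here;
  # the harness then feeds each address through pci_can_use before using it.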
00:07:36.101 09:35:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_virtio.so.7.0 == liburing.so.* ]] 00:07:36.101 09:35:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:36.101 09:35:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vfio_user.so.5.0 == liburing.so.* ]] 00:07:36.101 09:35:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:36.101 09:35:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_iscsi.so.6.0 == liburing.so.* ]] 00:07:36.101 09:35:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:36.101 09:35:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_uring.so.6.0 == liburing.so.* ]] 00:07:36.101 09:35:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:36.101 09:35:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_error.so.2.0 == liburing.so.* ]] 00:07:36.101 09:35:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:36.101 09:35:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_ioat.so.6.0 == liburing.so.* ]] 00:07:36.101 09:35:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:36.101 09:35:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ioat.so.7.0 == liburing.so.* ]] 00:07:36.101 09:35:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:36.101 09:35:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_dsa.so.5.0 == liburing.so.* ]] 00:07:36.101 09:35:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:36.101 09:35:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_iaa.so.3.0 == liburing.so.* ]] 00:07:36.101 09:35:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:36.101 09:35:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_idxd.so.12.1 == liburing.so.* ]] 00:07:36.101 09:35:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:36.101 09:35:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dynamic.so.4.0 == liburing.so.* ]] 00:07:36.101 09:35:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:36.101 09:35:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_env_dpdk.so.15.1 == liburing.so.* ]] 00:07:36.101 09:35:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:36.101 09:35:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dpdk_governor.so.4.0 == liburing.so.* ]] 00:07:36.101 09:35:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:36.101 09:35:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_gscheduler.so.4.0 == liburing.so.* ]] 00:07:36.101 09:35:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:36.101 09:35:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_posix.so.6.0 == liburing.so.* ]] 00:07:36.101 09:35:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:36.101 09:35:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_uring.so.5.0 == liburing.so.* ]] 00:07:36.101 09:35:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:36.101 09:35:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_file.so.2.0 == liburing.so.* ]] 00:07:36.101 09:35:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:36.101 09:35:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_linux.so.1.0 == liburing.so.* ]] 00:07:36.101 09:35:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:36.101 09:35:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_fsdev_aio.so.1.0 == liburing.so.* ]] 00:07:36.101 09:35:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:36.101 09:35:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_fsdev.so.2.0 == liburing.so.* ]] 00:07:36.101 09:35:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:36.101 09:35:23 spdk_dd -- 
dd/common.sh@143 -- # [[ libspdk_event.so.14.0 == liburing.so.* ]] 00:07:36.101 09:35:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:36.101 09:35:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_bdev.so.6.0 == liburing.so.* ]] 00:07:36.101 09:35:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:36.101 09:35:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev.so.17.0 == liburing.so.* ]] 00:07:36.101 09:35:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:36.101 09:35:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_notify.so.6.0 == liburing.so.* ]] 00:07:36.101 09:35:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:36.101 09:35:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_accel.so.6.0 == liburing.so.* ]] 00:07:36.101 09:35:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:36.101 09:35:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel.so.16.0 == liburing.so.* ]] 00:07:36.101 09:35:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:36.101 09:35:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_dma.so.5.0 == liburing.so.* ]] 00:07:36.101 09:35:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:36.101 09:35:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_vmd.so.6.0 == liburing.so.* ]] 00:07:36.101 09:35:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:36.101 09:35:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vmd.so.6.0 == liburing.so.* ]] 00:07:36.101 09:35:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:36.101 09:35:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_sock.so.5.0 == liburing.so.* ]] 00:07:36.101 09:35:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:36.101 09:35:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock.so.10.0 == liburing.so.* ]] 00:07:36.101 09:35:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:36.101 09:35:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_iobuf.so.3.0 == liburing.so.* ]] 00:07:36.101 09:35:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:36.101 09:35:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_keyring.so.1.0 == liburing.so.* ]] 00:07:36.101 09:35:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:36.101 09:35:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_init.so.6.0 == liburing.so.* ]] 00:07:36.101 09:35:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:36.101 09:35:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_thread.so.11.0 == liburing.so.* ]] 00:07:36.101 09:35:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:36.101 09:35:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_trace.so.11.0 == liburing.so.* ]] 00:07:36.101 09:35:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:36.101 09:35:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring.so.2.0 == liburing.so.* ]] 00:07:36.101 09:35:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:36.101 09:35:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rpc.so.6.0 == liburing.so.* ]] 00:07:36.101 09:35:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:36.101 09:35:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_jsonrpc.so.6.0 == liburing.so.* ]] 00:07:36.101 09:35:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:36.101 09:35:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_json.so.6.0 == liburing.so.* ]] 00:07:36.101 09:35:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:36.101 09:35:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_util.so.10.1 == liburing.so.* ]] 00:07:36.101 09:35:23 spdk_dd -- dd/common.sh@142 -- 
# read -r _ lib _ 00:07:36.101 09:35:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_log.so.7.1 == liburing.so.* ]] 00:07:36.101 09:35:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:36.101 09:35:23 spdk_dd -- dd/common.sh@143 -- # [[ librte_bus_pci.so.24 == liburing.so.* ]] 00:07:36.101 09:35:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:36.101 09:35:23 spdk_dd -- dd/common.sh@143 -- # [[ librte_cryptodev.so.24 == liburing.so.* ]] 00:07:36.101 09:35:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:36.101 09:35:23 spdk_dd -- dd/common.sh@143 -- # [[ librte_dmadev.so.24 == liburing.so.* ]] 00:07:36.101 09:35:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:36.101 09:35:23 spdk_dd -- dd/common.sh@143 -- # [[ librte_eal.so.24 == liburing.so.* ]] 00:07:36.101 09:35:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:36.101 09:35:23 spdk_dd -- dd/common.sh@143 -- # [[ librte_ethdev.so.24 == liburing.so.* ]] 00:07:36.101 09:35:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:36.101 09:35:23 spdk_dd -- dd/common.sh@143 -- # [[ librte_hash.so.24 == liburing.so.* ]] 00:07:36.101 09:35:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:36.101 09:35:23 spdk_dd -- dd/common.sh@143 -- # [[ librte_kvargs.so.24 == liburing.so.* ]] 00:07:36.101 09:35:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:36.101 09:35:23 spdk_dd -- dd/common.sh@143 -- # [[ librte_log.so.24 == liburing.so.* ]] 00:07:36.101 09:35:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:36.101 09:35:23 spdk_dd -- dd/common.sh@143 -- # [[ librte_mbuf.so.24 == liburing.so.* ]] 00:07:36.101 09:35:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:36.101 09:35:23 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool.so.24 == liburing.so.* ]] 00:07:36.101 09:35:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:36.101 09:35:23 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool_ring.so.24 == liburing.so.* ]] 00:07:36.101 09:35:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:36.101 09:35:23 spdk_dd -- dd/common.sh@143 -- # [[ librte_net.so.24 == liburing.so.* ]] 00:07:36.101 09:35:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:36.101 09:35:23 spdk_dd -- dd/common.sh@143 -- # [[ librte_pci.so.24 == liburing.so.* ]] 00:07:36.101 09:35:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:36.101 09:35:23 spdk_dd -- dd/common.sh@143 -- # [[ librte_power.so.24 == liburing.so.* ]] 00:07:36.102 09:35:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:36.102 09:35:23 spdk_dd -- dd/common.sh@143 -- # [[ librte_rcu.so.24 == liburing.so.* ]] 00:07:36.102 09:35:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:36.102 09:35:23 spdk_dd -- dd/common.sh@143 -- # [[ librte_ring.so.24 == liburing.so.* ]] 00:07:36.102 09:35:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:36.102 09:35:23 spdk_dd -- dd/common.sh@143 -- # [[ librte_telemetry.so.24 == liburing.so.* ]] 00:07:36.102 09:35:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:36.102 09:35:23 spdk_dd -- dd/common.sh@143 -- # [[ librte_vhost.so.24 == liburing.so.* ]] 00:07:36.102 09:35:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:36.102 09:35:23 spdk_dd -- dd/common.sh@143 -- # [[ liburing.so.2 == liburing.so.* ]] 00:07:36.102 09:35:23 spdk_dd -- dd/common.sh@144 -- # printf '* spdk_dd linked to liburing\n' 00:07:36.102 * spdk_dd linked to liburing 00:07:36.102 09:35:23 spdk_dd -- dd/common.sh@146 -- # [[ -e 
/home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:07:36.102 09:35:23 spdk_dd -- dd/common.sh@147 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:07:36.102 09:35:23 spdk_dd -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:36.102 09:35:23 spdk_dd -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:36.102 09:35:23 spdk_dd -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:36.102 09:35:23 spdk_dd -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:36.102 09:35:23 spdk_dd -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:07:36.102 09:35:23 spdk_dd -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:36.102 09:35:23 spdk_dd -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:36.102 09:35:23 spdk_dd -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:36.102 09:35:23 spdk_dd -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:36.102 09:35:23 spdk_dd -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:36.102 09:35:23 spdk_dd -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:36.102 09:35:23 spdk_dd -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:36.102 09:35:23 spdk_dd -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:36.102 09:35:23 spdk_dd -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:36.102 09:35:23 spdk_dd -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:36.102 09:35:23 spdk_dd -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:36.102 09:35:23 spdk_dd -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:07:36.102 09:35:23 spdk_dd -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:07:36.102 09:35:23 spdk_dd -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:36.102 09:35:23 spdk_dd -- common/build_config.sh@20 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:07:36.102 09:35:23 spdk_dd -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:07:36.102 09:35:23 spdk_dd -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:07:36.102 09:35:23 spdk_dd -- common/build_config.sh@23 -- # CONFIG_CET=n 00:07:36.102 09:35:23 spdk_dd -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:36.102 09:35:23 spdk_dd -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:07:36.102 09:35:23 spdk_dd -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:07:36.102 09:35:23 spdk_dd -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:07:36.102 09:35:23 spdk_dd -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:36.102 09:35:23 spdk_dd -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:36.102 09:35:23 spdk_dd -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:07:36.102 09:35:23 spdk_dd -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:07:36.102 09:35:23 spdk_dd -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:07:36.102 09:35:23 spdk_dd -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:07:36.102 09:35:23 spdk_dd -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:07:36.102 09:35:23 spdk_dd -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:07:36.102 09:35:23 spdk_dd -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:07:36.102 09:35:23 spdk_dd -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:07:36.102 09:35:23 spdk_dd -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:07:36.102 09:35:23 spdk_dd -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:07:36.102 09:35:23 spdk_dd -- common/build_config.sh@40 -- # 
CONFIG_CRYPTO=n 00:07:36.102 09:35:23 spdk_dd -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:07:36.102 09:35:23 spdk_dd -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:07:36.102 09:35:23 spdk_dd -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:07:36.102 09:35:23 spdk_dd -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:07:36.102 09:35:23 spdk_dd -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:07:36.102 09:35:23 spdk_dd -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:07:36.102 09:35:23 spdk_dd -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:36.102 09:35:23 spdk_dd -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:07:36.102 09:35:23 spdk_dd -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:07:36.102 09:35:23 spdk_dd -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:07:36.102 09:35:23 spdk_dd -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:07:36.102 09:35:23 spdk_dd -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:07:36.102 09:35:23 spdk_dd -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:07:36.102 09:35:23 spdk_dd -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:36.102 09:35:23 spdk_dd -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:07:36.102 09:35:23 spdk_dd -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:07:36.102 09:35:23 spdk_dd -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:07:36.102 09:35:23 spdk_dd -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:07:36.102 09:35:23 spdk_dd -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:07:36.102 09:35:23 spdk_dd -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=y 00:07:36.102 09:35:23 spdk_dd -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:07:36.102 09:35:23 spdk_dd -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:07:36.102 09:35:23 spdk_dd -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:07:36.102 09:35:23 spdk_dd -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:07:36.102 09:35:23 spdk_dd -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:07:36.102 09:35:23 spdk_dd -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:07:36.102 09:35:23 spdk_dd -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:07:36.102 09:35:23 spdk_dd -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:07:36.102 09:35:23 spdk_dd -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:07:36.102 09:35:23 spdk_dd -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:07:36.102 09:35:23 spdk_dd -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:07:36.102 09:35:23 spdk_dd -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:07:36.102 09:35:23 spdk_dd -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:07:36.102 09:35:23 spdk_dd -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:07:36.102 09:35:23 spdk_dd -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:36.102 09:35:23 spdk_dd -- common/build_config.sh@76 -- # CONFIG_FC=n 00:07:36.102 09:35:23 spdk_dd -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:07:36.102 09:35:23 spdk_dd -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:07:36.102 09:35:23 spdk_dd -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:07:36.102 09:35:23 spdk_dd -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:07:36.102 09:35:23 spdk_dd -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:07:36.102 09:35:23 spdk_dd -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:07:36.102 09:35:23 spdk_dd 
-- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:07:36.102 09:35:23 spdk_dd -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:07:36.102 09:35:23 spdk_dd -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:07:36.102 09:35:23 spdk_dd -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:07:36.102 09:35:23 spdk_dd -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:36.102 09:35:23 spdk_dd -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:07:36.102 09:35:23 spdk_dd -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:07:36.102 09:35:23 spdk_dd -- common/build_config.sh@90 -- # CONFIG_URING=y 00:07:36.102 09:35:23 spdk_dd -- dd/common.sh@149 -- # [[ y != y ]] 00:07:36.102 09:35:23 spdk_dd -- dd/common.sh@152 -- # export liburing_in_use=1 00:07:36.102 09:35:23 spdk_dd -- dd/common.sh@152 -- # liburing_in_use=1 00:07:36.102 09:35:23 spdk_dd -- dd/common.sh@153 -- # return 0 00:07:36.102 09:35:23 spdk_dd -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:07:36.102 09:35:23 spdk_dd -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:07:36.102 09:35:23 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:36.102 09:35:23 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:36.102 09:35:23 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:36.102 ************************************ 00:07:36.102 START TEST spdk_dd_basic_rw 00:07:36.102 ************************************ 00:07:36.102 09:35:23 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:07:36.102 * Looking for test storage... 00:07:36.102 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:36.102 09:35:23 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:36.102 09:35:23 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1693 -- # lcov --version 00:07:36.102 09:35:23 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:36.362 09:35:23 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:36.362 09:35:23 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:36.362 09:35:23 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:36.362 09:35:23 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:36.362 09:35:23 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@336 -- # IFS=.-: 00:07:36.362 09:35:23 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@336 -- # read -ra ver1 00:07:36.362 09:35:23 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@337 -- # IFS=.-: 00:07:36.362 09:35:23 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@337 -- # read -ra ver2 00:07:36.362 09:35:23 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@338 -- # local 'op=<' 00:07:36.362 09:35:23 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@340 -- # ver1_l=2 00:07:36.362 09:35:23 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@341 -- # ver2_l=1 00:07:36.362 09:35:23 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:36.362 09:35:23 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@344 -- # case "$op" in 00:07:36.362 09:35:23 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@345 -- # : 1 00:07:36.362 09:35:23 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:36.362 09:35:23 spdk_dd.spdk_dd_basic_rw -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:36.362 09:35:23 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@365 -- # decimal 1 00:07:36.362 09:35:23 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@353 -- # local d=1 00:07:36.362 09:35:23 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:36.362 09:35:23 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@355 -- # echo 1 00:07:36.362 09:35:23 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@365 -- # ver1[v]=1 00:07:36.362 09:35:23 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@366 -- # decimal 2 00:07:36.362 09:35:23 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@353 -- # local d=2 00:07:36.362 09:35:23 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:36.362 09:35:23 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@355 -- # echo 2 00:07:36.362 09:35:23 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@366 -- # ver2[v]=2 00:07:36.362 09:35:23 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:36.362 09:35:23 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:36.362 09:35:23 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@368 -- # return 0 00:07:36.362 09:35:23 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:36.362 09:35:23 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:36.362 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:36.362 --rc genhtml_branch_coverage=1 00:07:36.362 --rc genhtml_function_coverage=1 00:07:36.362 --rc genhtml_legend=1 00:07:36.362 --rc geninfo_all_blocks=1 00:07:36.362 --rc geninfo_unexecuted_blocks=1 00:07:36.362 00:07:36.362 ' 00:07:36.362 09:35:23 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:36.362 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:36.362 --rc genhtml_branch_coverage=1 00:07:36.362 --rc genhtml_function_coverage=1 00:07:36.362 --rc genhtml_legend=1 00:07:36.362 --rc geninfo_all_blocks=1 00:07:36.362 --rc geninfo_unexecuted_blocks=1 00:07:36.362 00:07:36.362 ' 00:07:36.362 09:35:23 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:36.362 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:36.362 --rc genhtml_branch_coverage=1 00:07:36.362 --rc genhtml_function_coverage=1 00:07:36.362 --rc genhtml_legend=1 00:07:36.362 --rc geninfo_all_blocks=1 00:07:36.362 --rc geninfo_unexecuted_blocks=1 00:07:36.362 00:07:36.362 ' 00:07:36.362 09:35:23 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:36.362 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:36.362 --rc genhtml_branch_coverage=1 00:07:36.362 --rc genhtml_function_coverage=1 00:07:36.362 --rc genhtml_legend=1 00:07:36.362 --rc geninfo_all_blocks=1 00:07:36.362 --rc geninfo_unexecuted_blocks=1 00:07:36.362 00:07:36.362 ' 00:07:36.362 09:35:23 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:36.362 09:35:23 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@15 -- # shopt -s extglob 00:07:36.362 09:35:23 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:36.362 09:35:23 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:36.362 09:35:23 spdk_dd.spdk_dd_basic_rw -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:36.363 09:35:23 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:36.363 09:35:23 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:36.363 09:35:23 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:36.363 09:35:23 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@5 -- # export PATH 00:07:36.363 09:35:23 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:36.363 09:35:23 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:07:36.363 09:35:23 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:07:36.363 09:35:23 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:07:36.363 09:35:23 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:10.0 00:07:36.363 09:35:23 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:07:36.363 09:35:23 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:07:36.363 09:35:23 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:07:36.363 09:35:23 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:36.363 09:35:23 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 
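[Editor's note] The check_liburing step traced earlier decides whether the spdk_dd binary was built against liburing by scanning its ELF NEEDED entries, which is why every libspdk_*/librte_* name above is tested against liburing.so.*. A standalone sketch of that loop, simplified from dd/common.sh:

  # Sketch of the check_liburing loop: read the dynamic NEEDED entries of spdk_dd
  # and flag liburing if any of them matches liburing.so.*.
  liburing_in_use=0
  while read -r _ lib _; do
    # objdump -p lines look like "  NEEDED   liburing.so.2", so $lib is the soname.
    if [[ $lib == liburing.so.* ]]; then
      liburing_in_use=1
      printf '* spdk_dd linked to liburing\n'
    fi
  done < <(objdump -p /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd | grep NEEDED)

dd/dd.sh@15 then combines liburing_in_use with SPDK_TEST_URING to decide whether the uring-specific paths can be exercised, as the arithmetic test later in this trace shows.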
00:07:36.363 09:35:23 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:10.0 00:07:36.363 09:35:23 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@124 -- # local pci=0000:00:10.0 lbaf id 00:07:36.363 09:35:23 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # mapfile -t id 00:07:36.363 09:35:23 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:10.0' 00:07:36.624 09:35:24 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update 
Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 22 Data Units Written: 3 Host Read Commands: 496 
Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:07:36.624 09:35:24 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@130 -- # lbaf=04 00:07:36.625 09:35:24 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration 
Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported 
SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 22 Data Units Written: 3 Host Read Commands: 496 Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format 
#02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:07:36.625 09:35:24 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@132 -- # lbaf=4096 00:07:36.625 09:35:24 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@134 -- # echo 4096 00:07:36.625 09:35:24 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # native_bs=4096 00:07:36.625 09:35:24 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # : 00:07:36.625 09:35:24 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # gen_conf 00:07:36.625 09:35:24 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:07:36.625 09:35:24 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:36.625 09:35:24 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:07:36.625 09:35:24 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:07:36.625 09:35:24 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:36.625 09:35:24 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:07:36.625 ************************************ 00:07:36.625 START TEST dd_bs_lt_native_bs 00:07:36.625 ************************************ 00:07:36.625 09:35:24 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1129 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:07:36.625 09:35:24 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@652 -- # local es=0 00:07:36.625 09:35:24 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:07:36.625 09:35:24 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:36.625 09:35:24 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:36.625 09:35:24 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # type -t 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:36.625 09:35:24 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:36.625 09:35:24 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:36.625 09:35:24 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:36.625 09:35:24 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:36.625 09:35:24 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:36.625 09:35:24 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:07:36.625 { 00:07:36.625 "subsystems": [ 00:07:36.625 { 00:07:36.625 "subsystem": "bdev", 00:07:36.625 "config": [ 00:07:36.625 { 00:07:36.625 "params": { 00:07:36.625 "trtype": "pcie", 00:07:36.625 "traddr": "0000:00:10.0", 00:07:36.625 "name": "Nvme0" 00:07:36.625 }, 00:07:36.625 "method": "bdev_nvme_attach_controller" 00:07:36.625 }, 00:07:36.625 { 00:07:36.625 "method": "bdev_wait_for_examine" 00:07:36.625 } 00:07:36.625 ] 00:07:36.625 } 00:07:36.625 ] 00:07:36.625 } 00:07:36.625 [2024-11-19 09:35:24.100932] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:07:36.625 [2024-11-19 09:35:24.101044] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59685 ] 00:07:36.884 [2024-11-19 09:35:24.255151] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.884 [2024-11-19 09:35:24.315397] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.884 [2024-11-19 09:35:24.379674] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:36.884 [2024-11-19 09:35:24.494706] spdk_dd.c:1161:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:07:36.884 [2024-11-19 09:35:24.494788] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:37.144 [2024-11-19 09:35:24.627668] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:37.144 09:35:24 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@655 -- # es=234 00:07:37.144 09:35:24 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:37.144 09:35:24 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@664 -- # es=106 00:07:37.144 09:35:24 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@665 -- # case "$es" in 00:07:37.144 09:35:24 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@672 -- # es=1 00:07:37.144 09:35:24 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:37.144 00:07:37.144 real 0m0.648s 00:07:37.144 user 0m0.424s 00:07:37.144 sys 0m0.175s 00:07:37.144 09:35:24 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:37.144 09:35:24 
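Two things worth pulling out of the trace above: every spdk_dd call in this run gets its block-device layer from the small JSON config that gen_conf streams in over /dev/fd/61 or /dev/fd/62 (attach the controller at PCIe 0000:00:10.0 as "Nvme0", then bdev_wait_for_examine), and the dd_bs_lt_native_bs case wraps the call in NOT because a --bs of 2048 has to be refused once the active LBA format reports a 4096-byte data size. A rough stand-alone sketch, assuming the same vagrant build tree and using a made-up config file name in place of the fd plumbing:

# Recreate the bdev config that gen_conf streams to every spdk_dd call here
# (nvme0_bdev.json is a hypothetical name; the harness never writes it to disk).
cat > nvme0_bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": { "trtype": "pcie", "traddr": "0000:00:10.0", "name": "Nvme0" },
          "method": "bdev_nvme_attach_controller"
        },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
EOF

# Negative check from dd_bs_lt_native_bs: a --bs below the active LBA format's
# 4096-byte data size must be rejected (the "--bs value cannot be less than ...
# native block size" error above), so a zero exit status here would be a failure.
if /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --ob=Nvme0n1 \
      --bs=2048 --count=1 --json nvme0_bdev.json; then
  echo "unexpected: bs=2048 accepted despite 4096-byte native block size" >&2
fi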
spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@10 -- # set +x 00:07:37.144 ************************************ 00:07:37.144 END TEST dd_bs_lt_native_bs 00:07:37.144 ************************************ 00:07:37.144 09:35:24 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:07:37.144 09:35:24 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:37.144 09:35:24 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:37.144 09:35:24 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:07:37.144 ************************************ 00:07:37.144 START TEST dd_rw 00:07:37.144 ************************************ 00:07:37.144 09:35:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1129 -- # basic_rw 4096 00:07:37.144 09:35:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:07:37.144 09:35:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@12 -- # local count size 00:07:37.144 09:35:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@13 -- # local qds bss 00:07:37.144 09:35:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:07:37.144 09:35:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:07:37.144 09:35:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:07:37.144 09:35:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:07:37.144 09:35:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:07:37.144 09:35:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:07:37.144 09:35:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:07:37.144 09:35:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:07:37.144 09:35:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:37.144 09:35:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:07:37.144 09:35:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:07:37.144 09:35:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:07:37.144 09:35:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:07:37.144 09:35:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:37.144 09:35:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:38.081 09:35:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 00:07:38.081 09:35:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:38.081 09:35:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:38.081 09:35:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:38.081 [2024-11-19 09:35:25.437171] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 
00:07:38.081 [2024-11-19 09:35:25.437310] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59717 ] 00:07:38.081 { 00:07:38.081 "subsystems": [ 00:07:38.081 { 00:07:38.081 "subsystem": "bdev", 00:07:38.081 "config": [ 00:07:38.081 { 00:07:38.081 "params": { 00:07:38.081 "trtype": "pcie", 00:07:38.081 "traddr": "0000:00:10.0", 00:07:38.081 "name": "Nvme0" 00:07:38.081 }, 00:07:38.081 "method": "bdev_nvme_attach_controller" 00:07:38.081 }, 00:07:38.081 { 00:07:38.081 "method": "bdev_wait_for_examine" 00:07:38.081 } 00:07:38.081 ] 00:07:38.081 } 00:07:38.081 ] 00:07:38.081 } 00:07:38.081 [2024-11-19 09:35:25.580606] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.081 [2024-11-19 09:35:25.655569] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.341 [2024-11-19 09:35:25.713659] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:38.341  [2024-11-19T09:35:26.223Z] Copying: 60/60 [kB] (average 29 MBps) 00:07:38.600 00:07:38.600 09:35:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:38.600 09:35:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:07:38.600 09:35:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:38.600 09:35:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:38.600 [2024-11-19 09:35:26.061892] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 
00:07:38.600 [2024-11-19 09:35:26.061994] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59727 ] 00:07:38.600 { 00:07:38.600 "subsystems": [ 00:07:38.600 { 00:07:38.600 "subsystem": "bdev", 00:07:38.600 "config": [ 00:07:38.600 { 00:07:38.600 "params": { 00:07:38.600 "trtype": "pcie", 00:07:38.600 "traddr": "0000:00:10.0", 00:07:38.600 "name": "Nvme0" 00:07:38.600 }, 00:07:38.600 "method": "bdev_nvme_attach_controller" 00:07:38.600 }, 00:07:38.600 { 00:07:38.600 "method": "bdev_wait_for_examine" 00:07:38.600 } 00:07:38.600 ] 00:07:38.600 } 00:07:38.600 ] 00:07:38.600 } 00:07:38.600 [2024-11-19 09:35:26.209796] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.859 [2024-11-19 09:35:26.261294] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.859 [2024-11-19 09:35:26.318082] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:38.859  [2024-11-19T09:35:26.741Z] Copying: 60/60 [kB] (average 19 MBps) 00:07:39.118 00:07:39.118 09:35:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:39.118 09:35:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:07:39.118 09:35:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:39.118 09:35:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:39.118 09:35:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:07:39.118 09:35:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:39.118 09:35:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:39.118 09:35:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:39.118 09:35:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:39.118 09:35:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:39.118 09:35:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:39.118 [2024-11-19 09:35:26.675493] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 
00:07:39.118 [2024-11-19 09:35:26.675582] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59746 ] 00:07:39.118 { 00:07:39.118 "subsystems": [ 00:07:39.118 { 00:07:39.118 "subsystem": "bdev", 00:07:39.118 "config": [ 00:07:39.118 { 00:07:39.118 "params": { 00:07:39.118 "trtype": "pcie", 00:07:39.118 "traddr": "0000:00:10.0", 00:07:39.118 "name": "Nvme0" 00:07:39.118 }, 00:07:39.119 "method": "bdev_nvme_attach_controller" 00:07:39.119 }, 00:07:39.119 { 00:07:39.119 "method": "bdev_wait_for_examine" 00:07:39.119 } 00:07:39.119 ] 00:07:39.119 } 00:07:39.119 ] 00:07:39.119 } 00:07:39.404 [2024-11-19 09:35:26.812734] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.404 [2024-11-19 09:35:26.872086] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.404 [2024-11-19 09:35:26.928083] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:39.663  [2024-11-19T09:35:27.286Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:39.663 00:07:39.663 09:35:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:39.663 09:35:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:07:39.663 09:35:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:07:39.663 09:35:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:07:39.663 09:35:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:07:39.663 09:35:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:39.663 09:35:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:40.230 09:35:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:07:40.230 09:35:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:40.230 09:35:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:40.230 09:35:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:40.488 [2024-11-19 09:35:27.895990] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 
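Each dd_rw iteration above follows the same four-step pattern: write the generated data from dd.dump0 into the Nvme0n1 bdev at the chosen --bs/--qd, read the same region back into dd.dump1 with --count, byte-compare the two dumps, then blank the start of the namespace with a 1 MiB zero write before the next combination. A condensed sketch of one iteration, reusing the hypothetical nvme0_bdev.json from the earlier snippet and substituting dd from /dev/urandom for the harness's gen_bytes helper:

SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
bs=4096 qd=1 count=15
size=$((bs * count))                                     # 61440 bytes for this combo

# stand-in for gen_bytes: fill dd.dump0 with $size random bytes
dd if=/dev/urandom of=dd.dump0 bs="$size" count=1 iflag=fullblock status=none

"$SPDK_DD" --if=dd.dump0 --ob=Nvme0n1 --bs="$bs" --qd="$qd" --json nvme0_bdev.json
"$SPDK_DD" --ib=Nvme0n1 --of=dd.dump1 --bs="$bs" --qd="$qd" --count="$count" \
           --json nvme0_bdev.json
diff -q dd.dump0 dd.dump1                                # verify the round trip

# clear_nvme step: overwrite with zeroes before the next bs/qd pair
"$SPDK_DD" --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json nvme0_bdev.json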
00:07:40.488 [2024-11-19 09:35:27.896101] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59765 ] 00:07:40.488 { 00:07:40.488 "subsystems": [ 00:07:40.488 { 00:07:40.488 "subsystem": "bdev", 00:07:40.488 "config": [ 00:07:40.488 { 00:07:40.488 "params": { 00:07:40.488 "trtype": "pcie", 00:07:40.488 "traddr": "0000:00:10.0", 00:07:40.488 "name": "Nvme0" 00:07:40.488 }, 00:07:40.488 "method": "bdev_nvme_attach_controller" 00:07:40.488 }, 00:07:40.488 { 00:07:40.488 "method": "bdev_wait_for_examine" 00:07:40.488 } 00:07:40.489 ] 00:07:40.489 } 00:07:40.489 ] 00:07:40.489 } 00:07:40.489 [2024-11-19 09:35:28.041756] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.489 [2024-11-19 09:35:28.105513] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.747 [2024-11-19 09:35:28.159549] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:40.747  [2024-11-19T09:35:28.629Z] Copying: 60/60 [kB] (average 58 MBps) 00:07:41.006 00:07:41.006 09:35:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:07:41.006 09:35:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:41.006 09:35:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:41.006 09:35:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:41.006 { 00:07:41.006 "subsystems": [ 00:07:41.006 { 00:07:41.006 "subsystem": "bdev", 00:07:41.006 "config": [ 00:07:41.006 { 00:07:41.006 "params": { 00:07:41.006 "trtype": "pcie", 00:07:41.006 "traddr": "0000:00:10.0", 00:07:41.006 "name": "Nvme0" 00:07:41.006 }, 00:07:41.006 "method": "bdev_nvme_attach_controller" 00:07:41.006 }, 00:07:41.006 { 00:07:41.006 "method": "bdev_wait_for_examine" 00:07:41.006 } 00:07:41.006 ] 00:07:41.006 } 00:07:41.006 ] 00:07:41.006 } 00:07:41.006 [2024-11-19 09:35:28.548656] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 
00:07:41.006 [2024-11-19 09:35:28.548765] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59784 ] 00:07:41.264 [2024-11-19 09:35:28.696856] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.264 [2024-11-19 09:35:28.755605] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.264 [2024-11-19 09:35:28.810995] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:41.522  [2024-11-19T09:35:29.145Z] Copying: 60/60 [kB] (average 58 MBps) 00:07:41.522 00:07:41.522 09:35:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:41.522 09:35:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:07:41.522 09:35:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:41.522 09:35:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:41.522 09:35:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:07:41.522 09:35:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:41.522 09:35:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:41.522 09:35:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:41.522 09:35:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:41.522 09:35:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:41.522 09:35:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:41.779 [2024-11-19 09:35:29.159439] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 
00:07:41.779 [2024-11-19 09:35:29.159518] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59800 ] 00:07:41.779 { 00:07:41.779 "subsystems": [ 00:07:41.779 { 00:07:41.779 "subsystem": "bdev", 00:07:41.779 "config": [ 00:07:41.779 { 00:07:41.779 "params": { 00:07:41.779 "trtype": "pcie", 00:07:41.779 "traddr": "0000:00:10.0", 00:07:41.779 "name": "Nvme0" 00:07:41.779 }, 00:07:41.779 "method": "bdev_nvme_attach_controller" 00:07:41.779 }, 00:07:41.779 { 00:07:41.779 "method": "bdev_wait_for_examine" 00:07:41.779 } 00:07:41.779 ] 00:07:41.779 } 00:07:41.779 ] 00:07:41.779 } 00:07:41.779 [2024-11-19 09:35:29.303590] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.779 [2024-11-19 09:35:29.358808] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.084 [2024-11-19 09:35:29.418372] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:42.084  [2024-11-19T09:35:29.967Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:07:42.344 00:07:42.344 09:35:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:07:42.344 09:35:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:42.344 09:35:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:07:42.344 09:35:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:07:42.344 09:35:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:07:42.344 09:35:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:07:42.344 09:35:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:42.344 09:35:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:42.911 09:35:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:07:42.911 09:35:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:42.911 09:35:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:42.911 09:35:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:42.911 { 00:07:42.911 "subsystems": [ 00:07:42.911 { 00:07:42.911 "subsystem": "bdev", 00:07:42.911 "config": [ 00:07:42.911 { 00:07:42.911 "params": { 00:07:42.911 "trtype": "pcie", 00:07:42.911 "traddr": "0000:00:10.0", 00:07:42.911 "name": "Nvme0" 00:07:42.911 }, 00:07:42.911 "method": "bdev_nvme_attach_controller" 00:07:42.911 }, 00:07:42.911 { 00:07:42.911 "method": "bdev_wait_for_examine" 00:07:42.911 } 00:07:42.911 ] 00:07:42.911 } 00:07:42.911 ] 00:07:42.911 } 00:07:42.911 [2024-11-19 09:35:30.442560] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 
00:07:42.911 [2024-11-19 09:35:30.443282] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59824 ] 00:07:43.171 [2024-11-19 09:35:30.592473] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.171 [2024-11-19 09:35:30.651030] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.171 [2024-11-19 09:35:30.707981] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:43.429  [2024-11-19T09:35:31.052Z] Copying: 56/56 [kB] (average 54 MBps) 00:07:43.429 00:07:43.429 09:35:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:07:43.429 09:35:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:43.429 09:35:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:43.429 09:35:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:43.689 [2024-11-19 09:35:31.078740] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:07:43.689 [2024-11-19 09:35:31.078855] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59832 ] 00:07:43.689 { 00:07:43.689 "subsystems": [ 00:07:43.689 { 00:07:43.689 "subsystem": "bdev", 00:07:43.689 "config": [ 00:07:43.689 { 00:07:43.689 "params": { 00:07:43.689 "trtype": "pcie", 00:07:43.689 "traddr": "0000:00:10.0", 00:07:43.689 "name": "Nvme0" 00:07:43.689 }, 00:07:43.689 "method": "bdev_nvme_attach_controller" 00:07:43.689 }, 00:07:43.689 { 00:07:43.689 "method": "bdev_wait_for_examine" 00:07:43.689 } 00:07:43.689 ] 00:07:43.689 } 00:07:43.689 ] 00:07:43.689 } 00:07:43.689 [2024-11-19 09:35:31.227098] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.689 [2024-11-19 09:35:31.280669] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.949 [2024-11-19 09:35:31.339775] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:43.949  [2024-11-19T09:35:31.831Z] Copying: 56/56 [kB] (average 54 MBps) 00:07:44.208 00:07:44.208 09:35:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:44.208 09:35:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:07:44.208 09:35:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:44.208 09:35:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:44.208 09:35:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:07:44.208 09:35:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:44.208 09:35:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:44.208 09:35:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 
00:07:44.208 09:35:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:44.208 09:35:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:44.208 09:35:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:44.208 { 00:07:44.208 "subsystems": [ 00:07:44.208 { 00:07:44.208 "subsystem": "bdev", 00:07:44.208 "config": [ 00:07:44.208 { 00:07:44.208 "params": { 00:07:44.208 "trtype": "pcie", 00:07:44.208 "traddr": "0000:00:10.0", 00:07:44.208 "name": "Nvme0" 00:07:44.208 }, 00:07:44.208 "method": "bdev_nvme_attach_controller" 00:07:44.208 }, 00:07:44.208 { 00:07:44.208 "method": "bdev_wait_for_examine" 00:07:44.208 } 00:07:44.208 ] 00:07:44.208 } 00:07:44.208 ] 00:07:44.208 } 00:07:44.208 [2024-11-19 09:35:31.710501] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:07:44.208 [2024-11-19 09:35:31.710608] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59853 ] 00:07:44.468 [2024-11-19 09:35:31.854854] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.468 [2024-11-19 09:35:31.907885] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.468 [2024-11-19 09:35:31.966256] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:44.468  [2024-11-19T09:35:32.350Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:44.727 00:07:44.727 09:35:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:44.727 09:35:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:07:44.727 09:35:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:07:44.727 09:35:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:07:44.727 09:35:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:07:44.727 09:35:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:44.727 09:35:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:45.295 09:35:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:07:45.295 09:35:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:45.295 09:35:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:45.295 09:35:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:45.554 [2024-11-19 09:35:32.960344] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 
00:07:45.554 [2024-11-19 09:35:32.960488] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59872 ] 00:07:45.554 { 00:07:45.554 "subsystems": [ 00:07:45.554 { 00:07:45.554 "subsystem": "bdev", 00:07:45.554 "config": [ 00:07:45.554 { 00:07:45.554 "params": { 00:07:45.554 "trtype": "pcie", 00:07:45.554 "traddr": "0000:00:10.0", 00:07:45.554 "name": "Nvme0" 00:07:45.554 }, 00:07:45.554 "method": "bdev_nvme_attach_controller" 00:07:45.554 }, 00:07:45.554 { 00:07:45.554 "method": "bdev_wait_for_examine" 00:07:45.554 } 00:07:45.554 ] 00:07:45.554 } 00:07:45.554 ] 00:07:45.554 } 00:07:45.554 [2024-11-19 09:35:33.107767] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.554 [2024-11-19 09:35:33.161584] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.814 [2024-11-19 09:35:33.219756] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:45.814  [2024-11-19T09:35:33.696Z] Copying: 56/56 [kB] (average 54 MBps) 00:07:46.073 00:07:46.073 09:35:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:07:46.073 09:35:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:46.073 09:35:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:46.073 09:35:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:46.073 [2024-11-19 09:35:33.567695] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 
00:07:46.073 [2024-11-19 09:35:33.567787] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59891 ] 00:07:46.073 { 00:07:46.073 "subsystems": [ 00:07:46.073 { 00:07:46.073 "subsystem": "bdev", 00:07:46.073 "config": [ 00:07:46.073 { 00:07:46.073 "params": { 00:07:46.073 "trtype": "pcie", 00:07:46.073 "traddr": "0000:00:10.0", 00:07:46.073 "name": "Nvme0" 00:07:46.073 }, 00:07:46.073 "method": "bdev_nvme_attach_controller" 00:07:46.073 }, 00:07:46.073 { 00:07:46.073 "method": "bdev_wait_for_examine" 00:07:46.073 } 00:07:46.073 ] 00:07:46.073 } 00:07:46.073 ] 00:07:46.073 } 00:07:46.333 [2024-11-19 09:35:33.712944] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.333 [2024-11-19 09:35:33.769785] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.333 [2024-11-19 09:35:33.826134] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:46.333  [2024-11-19T09:35:34.216Z] Copying: 56/56 [kB] (average 54 MBps) 00:07:46.593 00:07:46.593 09:35:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:46.593 09:35:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:07:46.593 09:35:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:46.593 09:35:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:46.593 09:35:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:07:46.593 09:35:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:46.593 09:35:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:46.593 09:35:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:46.593 09:35:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:46.593 09:35:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:46.593 09:35:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:46.593 [2024-11-19 09:35:34.187369] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 
00:07:46.593 [2024-11-19 09:35:34.187484] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59901 ] 00:07:46.593 { 00:07:46.593 "subsystems": [ 00:07:46.593 { 00:07:46.593 "subsystem": "bdev", 00:07:46.593 "config": [ 00:07:46.593 { 00:07:46.593 "params": { 00:07:46.593 "trtype": "pcie", 00:07:46.593 "traddr": "0000:00:10.0", 00:07:46.593 "name": "Nvme0" 00:07:46.593 }, 00:07:46.593 "method": "bdev_nvme_attach_controller" 00:07:46.593 }, 00:07:46.593 { 00:07:46.593 "method": "bdev_wait_for_examine" 00:07:46.593 } 00:07:46.593 ] 00:07:46.593 } 00:07:46.593 ] 00:07:46.593 } 00:07:46.851 [2024-11-19 09:35:34.336031] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.852 [2024-11-19 09:35:34.393303] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.852 [2024-11-19 09:35:34.449227] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:47.110  [2024-11-19T09:35:34.992Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:07:47.369 00:07:47.369 09:35:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:07:47.369 09:35:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:47.369 09:35:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:07:47.369 09:35:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:07:47.369 09:35:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:07:47.369 09:35:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:07:47.369 09:35:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:47.369 09:35:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:47.938 09:35:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:07:47.938 09:35:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:47.938 09:35:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:47.938 09:35:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:47.938 { 00:07:47.938 "subsystems": [ 00:07:47.938 { 00:07:47.938 "subsystem": "bdev", 00:07:47.938 "config": [ 00:07:47.938 { 00:07:47.938 "params": { 00:07:47.938 "trtype": "pcie", 00:07:47.938 "traddr": "0000:00:10.0", 00:07:47.938 "name": "Nvme0" 00:07:47.938 }, 00:07:47.938 "method": "bdev_nvme_attach_controller" 00:07:47.938 }, 00:07:47.938 { 00:07:47.938 "method": "bdev_wait_for_examine" 00:07:47.938 } 00:07:47.938 ] 00:07:47.938 } 00:07:47.938 ] 00:07:47.938 } 00:07:47.938 [2024-11-19 09:35:35.335712] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 
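A small arithmetic cross-check on the gen_bytes sizes quoted above: the harness pairs count=15 with bs=4096, count=7 with bs=8192 and count=3 with bs=16384, and the byte count is simply count times bs in each case.

# size = count * bs for the three dd_rw block sizes seen in this log
echo $((15 * 4096)) $((7 * 8192)) $((3 * 16384))    # 61440 57344 49152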
00:07:47.938 [2024-11-19 09:35:35.335800] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59926 ] 00:07:47.938 [2024-11-19 09:35:35.479181] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.938 [2024-11-19 09:35:35.536594] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.198 [2024-11-19 09:35:35.596689] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:48.198  [2024-11-19T09:35:36.081Z] Copying: 48/48 [kB] (average 46 MBps) 00:07:48.458 00:07:48.458 09:35:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:07:48.458 09:35:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:48.458 09:35:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:48.458 09:35:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:48.458 { 00:07:48.458 "subsystems": [ 00:07:48.458 { 00:07:48.458 "subsystem": "bdev", 00:07:48.458 "config": [ 00:07:48.458 { 00:07:48.458 "params": { 00:07:48.458 "trtype": "pcie", 00:07:48.458 "traddr": "0000:00:10.0", 00:07:48.458 "name": "Nvme0" 00:07:48.458 }, 00:07:48.458 "method": "bdev_nvme_attach_controller" 00:07:48.458 }, 00:07:48.458 { 00:07:48.458 "method": "bdev_wait_for_examine" 00:07:48.458 } 00:07:48.458 ] 00:07:48.458 } 00:07:48.458 ] 00:07:48.458 } 00:07:48.458 [2024-11-19 09:35:35.981970] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 
00:07:48.458 [2024-11-19 09:35:35.982078] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59939 ] 00:07:48.717 [2024-11-19 09:35:36.136658] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.717 [2024-11-19 09:35:36.205386] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.717 [2024-11-19 09:35:36.267345] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:48.976  [2024-11-19T09:35:36.599Z] Copying: 48/48 [kB] (average 46 MBps) 00:07:48.976 00:07:48.976 09:35:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:48.976 09:35:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:07:48.976 09:35:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:48.976 09:35:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:48.976 09:35:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:07:48.976 09:35:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:48.976 09:35:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:48.976 09:35:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:48.976 09:35:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:48.976 09:35:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:48.976 09:35:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:49.235 [2024-11-19 09:35:36.635456] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 
00:07:49.235 [2024-11-19 09:35:36.635559] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59959 ] 00:07:49.235 { 00:07:49.235 "subsystems": [ 00:07:49.235 { 00:07:49.235 "subsystem": "bdev", 00:07:49.235 "config": [ 00:07:49.235 { 00:07:49.235 "params": { 00:07:49.235 "trtype": "pcie", 00:07:49.235 "traddr": "0000:00:10.0", 00:07:49.235 "name": "Nvme0" 00:07:49.235 }, 00:07:49.235 "method": "bdev_nvme_attach_controller" 00:07:49.235 }, 00:07:49.235 { 00:07:49.235 "method": "bdev_wait_for_examine" 00:07:49.235 } 00:07:49.235 ] 00:07:49.235 } 00:07:49.235 ] 00:07:49.235 } 00:07:49.235 [2024-11-19 09:35:36.782766] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.235 [2024-11-19 09:35:36.839742] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.533 [2024-11-19 09:35:36.896110] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:49.533  [2024-11-19T09:35:37.418Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:49.795 00:07:49.795 09:35:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:49.795 09:35:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:07:49.795 09:35:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:07:49.795 09:35:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:07:49.795 09:35:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:07:49.795 09:35:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:49.795 09:35:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:50.369 09:35:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:07:50.369 09:35:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:50.369 09:35:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:50.369 09:35:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:50.369 [2024-11-19 09:35:37.791626] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 
00:07:50.369 [2024-11-19 09:35:37.791726] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59981 ] 00:07:50.369 { 00:07:50.369 "subsystems": [ 00:07:50.369 { 00:07:50.369 "subsystem": "bdev", 00:07:50.369 "config": [ 00:07:50.369 { 00:07:50.369 "params": { 00:07:50.369 "trtype": "pcie", 00:07:50.369 "traddr": "0000:00:10.0", 00:07:50.369 "name": "Nvme0" 00:07:50.369 }, 00:07:50.369 "method": "bdev_nvme_attach_controller" 00:07:50.369 }, 00:07:50.369 { 00:07:50.369 "method": "bdev_wait_for_examine" 00:07:50.369 } 00:07:50.369 ] 00:07:50.369 } 00:07:50.369 ] 00:07:50.369 } 00:07:50.369 [2024-11-19 09:35:37.944707] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.628 [2024-11-19 09:35:38.013598] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.628 [2024-11-19 09:35:38.075868] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:50.628  [2024-11-19T09:35:38.509Z] Copying: 48/48 [kB] (average 46 MBps) 00:07:50.886 00:07:50.886 09:35:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:07:50.886 09:35:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:50.886 09:35:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:50.886 09:35:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:50.886 { 00:07:50.886 "subsystems": [ 00:07:50.886 { 00:07:50.886 "subsystem": "bdev", 00:07:50.886 "config": [ 00:07:50.886 { 00:07:50.886 "params": { 00:07:50.886 "trtype": "pcie", 00:07:50.886 "traddr": "0000:00:10.0", 00:07:50.886 "name": "Nvme0" 00:07:50.886 }, 00:07:50.886 "method": "bdev_nvme_attach_controller" 00:07:50.886 }, 00:07:50.886 { 00:07:50.886 "method": "bdev_wait_for_examine" 00:07:50.886 } 00:07:50.886 ] 00:07:50.886 } 00:07:50.886 ] 00:07:50.886 } 00:07:50.886 [2024-11-19 09:35:38.444677] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 
00:07:50.886 [2024-11-19 09:35:38.444787] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59995 ] 00:07:51.145 [2024-11-19 09:35:38.593159] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.145 [2024-11-19 09:35:38.648778] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.145 [2024-11-19 09:35:38.709990] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:51.404  [2024-11-19T09:35:39.028Z] Copying: 48/48 [kB] (average 46 MBps) 00:07:51.405 00:07:51.405 09:35:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:51.405 09:35:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:07:51.405 09:35:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:51.405 09:35:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:51.405 09:35:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:07:51.405 09:35:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:51.405 09:35:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:51.405 09:35:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:51.405 09:35:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:51.405 09:35:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:51.405 09:35:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:51.663 [2024-11-19 09:35:39.069089] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 
00:07:51.663 [2024-11-19 09:35:39.069185] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60010 ] 00:07:51.663 { 00:07:51.663 "subsystems": [ 00:07:51.663 { 00:07:51.663 "subsystem": "bdev", 00:07:51.663 "config": [ 00:07:51.663 { 00:07:51.663 "params": { 00:07:51.663 "trtype": "pcie", 00:07:51.663 "traddr": "0000:00:10.0", 00:07:51.663 "name": "Nvme0" 00:07:51.663 }, 00:07:51.663 "method": "bdev_nvme_attach_controller" 00:07:51.663 }, 00:07:51.663 { 00:07:51.663 "method": "bdev_wait_for_examine" 00:07:51.663 } 00:07:51.663 ] 00:07:51.663 } 00:07:51.663 ] 00:07:51.663 } 00:07:51.663 [2024-11-19 09:35:39.214165] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.663 [2024-11-19 09:35:39.275888] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.922 [2024-11-19 09:35:39.340115] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:51.922  [2024-11-19T09:35:39.804Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:52.181 00:07:52.181 00:07:52.181 real 0m14.909s 00:07:52.181 user 0m10.911s 00:07:52.181 sys 0m5.586s 00:07:52.181 09:35:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:52.181 09:35:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:52.181 ************************************ 00:07:52.181 END TEST dd_rw 00:07:52.181 ************************************ 00:07:52.181 09:35:39 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:07:52.181 09:35:39 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:52.181 09:35:39 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:52.181 09:35:39 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:07:52.181 ************************************ 00:07:52.181 START TEST dd_rw_offset 00:07:52.181 ************************************ 00:07:52.181 09:35:39 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1129 -- # basic_offset 00:07:52.181 09:35:39 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:07:52.181 09:35:39 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:07:52.181 09:35:39 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@98 -- # xtrace_disable 00:07:52.182 09:35:39 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:07:52.182 09:35:39 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:07:52.182 09:35:39 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@56 -- # 
data=l3sal7rrgbhpru7c5wp273jtdfj4vbmov46aw959rhe87e0f1ybiqmiln668a6vb8jvdp3xhcujnpljkbmxxpte2oa5c0ycn6uo74nn1wtdtvmhwv9nwyghyhu1aasezq7sh95gbuj79v2a6krx7f7hykrhj59pswf17rjg58067codtm3zr5kfqmytrte257ei99omah40ufl6ckrghf8xv39qcyuygae8d1t3u3mzdododujjpl5knv4cnu94hhnvqh90a9p6kivtl4ed104vzkkluur0sokvuazbohybkyp2mp3mo2yes2g52hppegfueck91kn9l6ibcuhxbbz9hldhulscjb4l6qdw5u7rp5pn3jjjaxwjx2zuc2lkox16xxore2ohah6ytmpxm53xqfcf686cqc081mvnux3e9ziahymsspch1td1espnzgrt6o25wyixalk4yfhgyuykfw1gbaa4bu7q1rwgxnry9gh8jc4fjll7d55aj7vjum97hhenakg94x06ozjj0xierw2aji5rtxigqqb1rsf0egvekpg5r7o3jbdec2qhsvalt7uxpnvwj8a8fu0v853yegkxszjzd62c6orq4ub8prytcw6vbpe1khhhxqfr5jft388qgc5k6qfsp1ofkckxj11d7m5inya2sxvihe3ckuw26671n06ntpxvsmrokas21h2dol0jcoo0g9siptzo8jg50hgpujc2ge3whw4dj1qpfw3dkw7z3mqwhjao8vkca7hww1q164a7yvhoj8xnjqakulhz9gr391kldawej2dp49xbiaj9kwvht5e91eamnx9d3pa7pf335wnhb6kuciczuf9572se7galdi4ydjgtw8qyd8vebss57odajdsqgd8pwtnq4xi5sgr99di1pjcj19f7nmbkva3mhgysz2i0y5qja0ajcgrapyhdc3q7x88vgxaxkw5bof4tj295e0a3pv1q2xew5m7pf4l124e98ylult8bobwqw2pdmy4q1n20un26rzxuzo2dzt5ki2q6lfvm4e2o90alq4j55yutw0j6mfvtfgy64ytkgai6tw7qhmgx7v4h2bf0onucgfdrzdf8fmzvytkqkp2qoeapy9xkson7mveelb7i0nw8uzbo5ra9rj1il64ryziib6e4uvm4tygzzfmevlchg2q2pkp26m6zkki1gtz9wtc2obqpgul31w00bkxhlwbl8udu7a10qf7u5yg63ooiqhe9rg5j0fq20hfmsalbtog4t9oufs30oq48p7vlepy8i8795zs5ga6s1fehb5fg0y9ijdwnusn2cwy42e4swfxn2jj36q3key8i19hlzbkees2z7zpwbdcgy5l9029moorff739l5w2ns0xon647n7eu3fodcnwp9bjav85rt4mi6anwwvyjmsdyoyy5pwh80ubp20wep1fg9o1zxreb04ipof3g6n55fcegs0y2kx9xhj21uxwlsf8i4e7hem62g5zd8rnwvwepdlvoc9dty2zuco4avfcm4fg50ds6sgk52156ejoph00f2uhf0a18poaheyrxx448213wt0br02sbq1ofuyj5rir76lz6y936xyyygmbzdyb6ueastqcam219raauy5ljbj61u6btxfoc0thy2ricaf7ru5qq5u0cvway5wi0vo62uft2pm2k9zojeh4g8c61mt86s7jt3su5f49jluplw6m7xxzu5hilha1ebunsj2ntv9y4176z3c4wij27mbuw6xro3kcvd5g5omr3vncbfqujyt26nn6p1feignfajp11ew70k1ux8k6dcfguebvcfeupavuvnhlq67jmyf1sl437a3byk4c895xwp2q45dfnng18jpqfpavdpfrelbbg2twph0kysezkxj8dgb6emcihlc7n50z1bhvhei3yro7vblktk45exvkzo9pkzppu78s8tu7tlpl8ke9zgtyxexoxtp6n1d841tn1kygbatnskikqw5aicvuc6lkrrk8oc7nymmas418es1wuve8eyuyl8pcsvfbyedcn278gazfcej3qhfwlwxjpwii0k9m3k1tovp892wfjm79i3qsyzx41shnmvz1nffryrh2qln6adycl1yxmuxg93linnwzf0gl6g57wk2d4a1qc6nfwwm7zuooolyc21b40a9j711gmjxtmtngb4z8jbfn3zsd6d2cowbyoctojr8j72av9jpeabewsqe986nz1pt5rcra9afir1b13l6dfw94jtuvkmzjuuit6r1mxdm0sgiad1p2ch51iyjgbuxoimfcw4alax9in6ewnym68zqgfkk143ksbiwq1tr3e6qs61o0n7ceidy7t9t5lr2cxnjnsu4l06pv9hans2ytzbde1obrfif4owq2uo1q0vx3njx6wqb2vicz0fhrefc4mftrwxxrr7fsxe3sf9zpqscm6aexcieoaxv57mbiefspcog4xmhat4tmpmto3k8h7vfm40zsi3p43wbqdvjsbx8x6q9hvgixhryphozknfan03xj5t54asi1f1mtmkd8kc64350mhumikujdr11fi5puj29p31tpv5ooxbygmj9fkdkce8up33shrp6nw1imlqe7w5kv72n1hazvbegry0anj5rmrbufqoumx52qsi95d6nwh39d2z7hcvftqz3s781qq2xqwo5ce9dtdjits6wbf8rtjozzwm5hzbj77k1319nut8ezfwv4mje5mc4bce837wc6asexnow4280gakabxortqke12f4iv1tqjwz0nf5w8nxk0h5mtjjyvoegnmtqzmsrthxtoalfy8x1oj8tidbqnu78vt9j5iw1i9y4g4pqc2a9vu350w2q5qj2m2g5u1vcyxg2puesqqxsigexl0oh7dw61s3nvukjbh2vron7rby4wsc37jc8vnm6uoeueee8ogxjq3gy75imrvzgsegx9ur5io2kwdq5qcflypgo7qq9qpggr90ikg5d8i6mjl9vpbcmwb2eqgh01az2hr7q9smk4iercbc8349ggva0jegbjfd6qczgsqzpkgdpawbryunicgmwuyhd7xzkwm6x06gkp14aq6mk089nsusmgjlponm62xrzrr2kffrh2hl66c14aowok87tzky9wbkjlpjph4ea83yqgcjvf908gtezsz1i4436m4726zouxsw4sx140wk5r8ulmpl9k7nbagsnmv1u93n611y806ytz0btuuuu76a23k9wusyhns6we6yw4kcnkiw8iw2laptri0ctrx40gm4901ksakf44cporexaoq903xxnm019rvo24tik2u3o8bhvpy16tib2xsy4u5v0cmf6pgs3m4zeuzmn6lsy2sq07yofijno624008aibkxjqiyul7rcpf2ju5l1eox098r55tdhs4jnz2j3a031bl5p8fzgmleki9apv2ncub25o04k8foltk9utes28h9r62cuaevhel937q89pdqyx0j7p4w49ga
9va4hmhwt91esmdfvco1ppm0nm6ox96s3yea1fikbzoqm9iqmsiug2lxt433bgf6uiewe8m9701bas9l5lpn5sl913isww2yq81iftnt9c8xytgwga5akyyx9xq9cu0pbt0m1sifuh8trbszov349racb5tib6ax3shhijkpbfay23oi9ni86x21bf155hz54a22zfa91juail0jgv6rjnyl374sdy9vknl0l1xmfefw599q1s0fuqlkmcc6q9hgqdkxo7302id31r8k2ptpt4ia8llgdg62r0izba8tbpq1nrqajj6xvkzh8uc17xzj53p1owfpbyqqpc2f01htr2kp4sietjk2dqm10xwmsn8hywniis3fjzuzub0mtrkkkze6ezwc40coyx1x5mrolztyusm9kgvhb3g3nlbd1aiouwr5lrqw8mlt4h7t4rua7z1zs1jk2ma2ombmsdzez6k2ojvp5tzlr2krt3xitkogcfeabvkocrkolq3d5fkl154y3uonn5k58dnweo58k47762p2fjit8h 00:07:52.182 09:35:39 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:07:52.182 09:35:39 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # gen_conf 00:07:52.182 09:35:39 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:07:52.182 09:35:39 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:07:52.441 [2024-11-19 09:35:39.807346] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:07:52.441 [2024-11-19 09:35:39.807442] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60045 ] 00:07:52.441 { 00:07:52.441 "subsystems": [ 00:07:52.441 { 00:07:52.441 "subsystem": "bdev", 00:07:52.441 "config": [ 00:07:52.441 { 00:07:52.441 "params": { 00:07:52.441 "trtype": "pcie", 00:07:52.441 "traddr": "0000:00:10.0", 00:07:52.441 "name": "Nvme0" 00:07:52.441 }, 00:07:52.441 "method": "bdev_nvme_attach_controller" 00:07:52.441 }, 00:07:52.441 { 00:07:52.441 "method": "bdev_wait_for_examine" 00:07:52.441 } 00:07:52.441 ] 00:07:52.441 } 00:07:52.441 ] 00:07:52.441 } 00:07:52.441 [2024-11-19 09:35:39.948068] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.441 [2024-11-19 09:35:40.004892] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.704 [2024-11-19 09:35:40.067378] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:52.704  [2024-11-19T09:35:40.584Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:07:52.962 00:07:52.962 09:35:40 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # gen_conf 00:07:52.962 09:35:40 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:07:52.962 09:35:40 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:07:52.962 09:35:40 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:07:52.962 { 00:07:52.962 "subsystems": [ 00:07:52.962 { 00:07:52.962 "subsystem": "bdev", 00:07:52.962 "config": [ 00:07:52.962 { 00:07:52.962 "params": { 00:07:52.962 "trtype": "pcie", 00:07:52.962 "traddr": "0000:00:10.0", 00:07:52.962 "name": "Nvme0" 00:07:52.962 }, 00:07:52.962 "method": "bdev_nvme_attach_controller" 00:07:52.962 }, 00:07:52.962 { 00:07:52.962 "method": "bdev_wait_for_examine" 00:07:52.962 } 00:07:52.962 ] 00:07:52.962 } 00:07:52.962 ] 00:07:52.962 } 00:07:52.962 [2024-11-19 09:35:40.440077] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 
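dd_rw_offset repeats the round trip with an offset: 4096 bytes of generated data are written one block past the start of the bdev with --seek=1, read back from the same offset with --skip=1 --count=1, and the leading 4096 bytes of dd.dump1 are then compared against the original string (the harness does this with read -rn4096 in bash). A hedged sketch of that flow; cmp stands in for the bash comparison, and SPDK_DD and nvme0_bdev.json are the assumed names from the earlier sketches:

# stand-in for gen_bytes 4096
dd if=/dev/urandom of=dd.dump0 bs=4096 count=1 iflag=fullblock status=none

"$SPDK_DD" --if=dd.dump0 --ob=Nvme0n1 --seek=1 --json nvme0_bdev.json
"$SPDK_DD" --ib=Nvme0n1 --of=dd.dump1 --skip=1 --count=1 --json nvme0_bdev.json

# compare only the first 4096 bytes of the read-back file
cmp -n 4096 dd.dump0 dd.dump1 && echo "offset round trip OK"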
00:07:52.962 [2024-11-19 09:35:40.440170] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60060 ] 00:07:52.962 [2024-11-19 09:35:40.583361] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.220 [2024-11-19 09:35:40.631199] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.220 [2024-11-19 09:35:40.686196] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:53.220  [2024-11-19T09:35:41.104Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:07:53.481 00:07:53.481 09:35:40 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:07:53.482 09:35:40 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@72 -- # [[ l3sal7rrgbhpru7c5wp273jtdfj4vbmov46aw959rhe87e0f1ybiqmiln668a6vb8jvdp3xhcujnpljkbmxxpte2oa5c0ycn6uo74nn1wtdtvmhwv9nwyghyhu1aasezq7sh95gbuj79v2a6krx7f7hykrhj59pswf17rjg58067codtm3zr5kfqmytrte257ei99omah40ufl6ckrghf8xv39qcyuygae8d1t3u3mzdododujjpl5knv4cnu94hhnvqh90a9p6kivtl4ed104vzkkluur0sokvuazbohybkyp2mp3mo2yes2g52hppegfueck91kn9l6ibcuhxbbz9hldhulscjb4l6qdw5u7rp5pn3jjjaxwjx2zuc2lkox16xxore2ohah6ytmpxm53xqfcf686cqc081mvnux3e9ziahymsspch1td1espnzgrt6o25wyixalk4yfhgyuykfw1gbaa4bu7q1rwgxnry9gh8jc4fjll7d55aj7vjum97hhenakg94x06ozjj0xierw2aji5rtxigqqb1rsf0egvekpg5r7o3jbdec2qhsvalt7uxpnvwj8a8fu0v853yegkxszjzd62c6orq4ub8prytcw6vbpe1khhhxqfr5jft388qgc5k6qfsp1ofkckxj11d7m5inya2sxvihe3ckuw26671n06ntpxvsmrokas21h2dol0jcoo0g9siptzo8jg50hgpujc2ge3whw4dj1qpfw3dkw7z3mqwhjao8vkca7hww1q164a7yvhoj8xnjqakulhz9gr391kldawej2dp49xbiaj9kwvht5e91eamnx9d3pa7pf335wnhb6kuciczuf9572se7galdi4ydjgtw8qyd8vebss57odajdsqgd8pwtnq4xi5sgr99di1pjcj19f7nmbkva3mhgysz2i0y5qja0ajcgrapyhdc3q7x88vgxaxkw5bof4tj295e0a3pv1q2xew5m7pf4l124e98ylult8bobwqw2pdmy4q1n20un26rzxuzo2dzt5ki2q6lfvm4e2o90alq4j55yutw0j6mfvtfgy64ytkgai6tw7qhmgx7v4h2bf0onucgfdrzdf8fmzvytkqkp2qoeapy9xkson7mveelb7i0nw8uzbo5ra9rj1il64ryziib6e4uvm4tygzzfmevlchg2q2pkp26m6zkki1gtz9wtc2obqpgul31w00bkxhlwbl8udu7a10qf7u5yg63ooiqhe9rg5j0fq20hfmsalbtog4t9oufs30oq48p7vlepy8i8795zs5ga6s1fehb5fg0y9ijdwnusn2cwy42e4swfxn2jj36q3key8i19hlzbkees2z7zpwbdcgy5l9029moorff739l5w2ns0xon647n7eu3fodcnwp9bjav85rt4mi6anwwvyjmsdyoyy5pwh80ubp20wep1fg9o1zxreb04ipof3g6n55fcegs0y2kx9xhj21uxwlsf8i4e7hem62g5zd8rnwvwepdlvoc9dty2zuco4avfcm4fg50ds6sgk52156ejoph00f2uhf0a18poaheyrxx448213wt0br02sbq1ofuyj5rir76lz6y936xyyygmbzdyb6ueastqcam219raauy5ljbj61u6btxfoc0thy2ricaf7ru5qq5u0cvway5wi0vo62uft2pm2k9zojeh4g8c61mt86s7jt3su5f49jluplw6m7xxzu5hilha1ebunsj2ntv9y4176z3c4wij27mbuw6xro3kcvd5g5omr3vncbfqujyt26nn6p1feignfajp11ew70k1ux8k6dcfguebvcfeupavuvnhlq67jmyf1sl437a3byk4c895xwp2q45dfnng18jpqfpavdpfrelbbg2twph0kysezkxj8dgb6emcihlc7n50z1bhvhei3yro7vblktk45exvkzo9pkzppu78s8tu7tlpl8ke9zgtyxexoxtp6n1d841tn1kygbatnskikqw5aicvuc6lkrrk8oc7nymmas418es1wuve8eyuyl8pcsvfbyedcn278gazfcej3qhfwlwxjpwii0k9m3k1tovp892wfjm79i3qsyzx41shnmvz1nffryrh2qln6adycl1yxmuxg93linnwzf0gl6g57wk2d4a1qc6nfwwm7zuooolyc21b40a9j711gmjxtmtngb4z8jbfn3zsd6d2cowbyoctojr8j72av9jpeabewsqe986nz1pt5rcra9afir1b13l6dfw94jtuvkmzjuuit6r1mxdm0sgiad1p2ch51iyjgbuxoimfcw4alax9in6ewnym68zqgfkk143ksbiwq1tr3e6qs61o0n7ceidy7t9t5lr2cxnjnsu4l06pv9hans2ytzbde1obrfif4owq2uo1q0vx3njx6wqb2vicz0fhrefc4mftrwxxrr7fsxe3sf9zpqscm6aexcieoaxv57mbiefspcog4xmhat4tmpmto3k8h7vfm40zsi3p43wbqdvjsbx8x6q9hvgixhryphozknfan03xj5t54asi1f1mtmkd8kc64350mhumikujdr11fi
5puj29p31tpv5ooxbygmj9fkdkce8up33shrp6nw1imlqe7w5kv72n1hazvbegry0anj5rmrbufqoumx52qsi95d6nwh39d2z7hcvftqz3s781qq2xqwo5ce9dtdjits6wbf8rtjozzwm5hzbj77k1319nut8ezfwv4mje5mc4bce837wc6asexnow4280gakabxortqke12f4iv1tqjwz0nf5w8nxk0h5mtjjyvoegnmtqzmsrthxtoalfy8x1oj8tidbqnu78vt9j5iw1i9y4g4pqc2a9vu350w2q5qj2m2g5u1vcyxg2puesqqxsigexl0oh7dw61s3nvukjbh2vron7rby4wsc37jc8vnm6uoeueee8ogxjq3gy75imrvzgsegx9ur5io2kwdq5qcflypgo7qq9qpggr90ikg5d8i6mjl9vpbcmwb2eqgh01az2hr7q9smk4iercbc8349ggva0jegbjfd6qczgsqzpkgdpawbryunicgmwuyhd7xzkwm6x06gkp14aq6mk089nsusmgjlponm62xrzrr2kffrh2hl66c14aowok87tzky9wbkjlpjph4ea83yqgcjvf908gtezsz1i4436m4726zouxsw4sx140wk5r8ulmpl9k7nbagsnmv1u93n611y806ytz0btuuuu76a23k9wusyhns6we6yw4kcnkiw8iw2laptri0ctrx40gm4901ksakf44cporexaoq903xxnm019rvo24tik2u3o8bhvpy16tib2xsy4u5v0cmf6pgs3m4zeuzmn6lsy2sq07yofijno624008aibkxjqiyul7rcpf2ju5l1eox098r55tdhs4jnz2j3a031bl5p8fzgmleki9apv2ncub25o04k8foltk9utes28h9r62cuaevhel937q89pdqyx0j7p4w49ga9va4hmhwt91esmdfvco1ppm0nm6ox96s3yea1fikbzoqm9iqmsiug2lxt433bgf6uiewe8m9701bas9l5lpn5sl913isww2yq81iftnt9c8xytgwga5akyyx9xq9cu0pbt0m1sifuh8trbszov349racb5tib6ax3shhijkpbfay23oi9ni86x21bf155hz54a22zfa91juail0jgv6rjnyl374sdy9vknl0l1xmfefw599q1s0fuqlkmcc6q9hgqdkxo7302id31r8k2ptpt4ia8llgdg62r0izba8tbpq1nrqajj6xvkzh8uc17xzj53p1owfpbyqqpc2f01htr2kp4sietjk2dqm10xwmsn8hywniis3fjzuzub0mtrkkkze6ezwc40coyx1x5mrolztyusm9kgvhb3g3nlbd1aiouwr5lrqw8mlt4h7t4rua7z1zs1jk2ma2ombmsdzez6k2ojvp5tzlr2krt3xitkogcfeabvkocrkolq3d5fkl154y3uonn5k58dnweo58k47762p2fjit8h == \l\3\s\a\l\7\r\r\g\b\h\p\r\u\7\c\5\w\p\2\7\3\j\t\d\f\j\4\v\b\m\o\v\4\6\a\w\9\5\9\r\h\e\8\7\e\0\f\1\y\b\i\q\m\i\l\n\6\6\8\a\6\v\b\8\j\v\d\p\3\x\h\c\u\j\n\p\l\j\k\b\m\x\x\p\t\e\2\o\a\5\c\0\y\c\n\6\u\o\7\4\n\n\1\w\t\d\t\v\m\h\w\v\9\n\w\y\g\h\y\h\u\1\a\a\s\e\z\q\7\s\h\9\5\g\b\u\j\7\9\v\2\a\6\k\r\x\7\f\7\h\y\k\r\h\j\5\9\p\s\w\f\1\7\r\j\g\5\8\0\6\7\c\o\d\t\m\3\z\r\5\k\f\q\m\y\t\r\t\e\2\5\7\e\i\9\9\o\m\a\h\4\0\u\f\l\6\c\k\r\g\h\f\8\x\v\3\9\q\c\y\u\y\g\a\e\8\d\1\t\3\u\3\m\z\d\o\d\o\d\u\j\j\p\l\5\k\n\v\4\c\n\u\9\4\h\h\n\v\q\h\9\0\a\9\p\6\k\i\v\t\l\4\e\d\1\0\4\v\z\k\k\l\u\u\r\0\s\o\k\v\u\a\z\b\o\h\y\b\k\y\p\2\m\p\3\m\o\2\y\e\s\2\g\5\2\h\p\p\e\g\f\u\e\c\k\9\1\k\n\9\l\6\i\b\c\u\h\x\b\b\z\9\h\l\d\h\u\l\s\c\j\b\4\l\6\q\d\w\5\u\7\r\p\5\p\n\3\j\j\j\a\x\w\j\x\2\z\u\c\2\l\k\o\x\1\6\x\x\o\r\e\2\o\h\a\h\6\y\t\m\p\x\m\5\3\x\q\f\c\f\6\8\6\c\q\c\0\8\1\m\v\n\u\x\3\e\9\z\i\a\h\y\m\s\s\p\c\h\1\t\d\1\e\s\p\n\z\g\r\t\6\o\2\5\w\y\i\x\a\l\k\4\y\f\h\g\y\u\y\k\f\w\1\g\b\a\a\4\b\u\7\q\1\r\w\g\x\n\r\y\9\g\h\8\j\c\4\f\j\l\l\7\d\5\5\a\j\7\v\j\u\m\9\7\h\h\e\n\a\k\g\9\4\x\0\6\o\z\j\j\0\x\i\e\r\w\2\a\j\i\5\r\t\x\i\g\q\q\b\1\r\s\f\0\e\g\v\e\k\p\g\5\r\7\o\3\j\b\d\e\c\2\q\h\s\v\a\l\t\7\u\x\p\n\v\w\j\8\a\8\f\u\0\v\8\5\3\y\e\g\k\x\s\z\j\z\d\6\2\c\6\o\r\q\4\u\b\8\p\r\y\t\c\w\6\v\b\p\e\1\k\h\h\h\x\q\f\r\5\j\f\t\3\8\8\q\g\c\5\k\6\q\f\s\p\1\o\f\k\c\k\x\j\1\1\d\7\m\5\i\n\y\a\2\s\x\v\i\h\e\3\c\k\u\w\2\6\6\7\1\n\0\6\n\t\p\x\v\s\m\r\o\k\a\s\2\1\h\2\d\o\l\0\j\c\o\o\0\g\9\s\i\p\t\z\o\8\j\g\5\0\h\g\p\u\j\c\2\g\e\3\w\h\w\4\d\j\1\q\p\f\w\3\d\k\w\7\z\3\m\q\w\h\j\a\o\8\v\k\c\a\7\h\w\w\1\q\1\6\4\a\7\y\v\h\o\j\8\x\n\j\q\a\k\u\l\h\z\9\g\r\3\9\1\k\l\d\a\w\e\j\2\d\p\4\9\x\b\i\a\j\9\k\w\v\h\t\5\e\9\1\e\a\m\n\x\9\d\3\p\a\7\p\f\3\3\5\w\n\h\b\6\k\u\c\i\c\z\u\f\9\5\7\2\s\e\7\g\a\l\d\i\4\y\d\j\g\t\w\8\q\y\d\8\v\e\b\s\s\5\7\o\d\a\j\d\s\q\g\d\8\p\w\t\n\q\4\x\i\5\s\g\r\9\9\d\i\1\p\j\c\j\1\9\f\7\n\m\b\k\v\a\3\m\h\g\y\s\z\2\i\0\y\5\q\j\a\0\a\j\c\g\r\a\p\y\h\d\c\3\q\7\x\8\8\v\g\x\a\x\k\w\5\b\o\f\4\t\j\2\9\5\e\0\a\3\p\v\1\q\2\x\e\w\5\m\7\p\f\4\l\1\2\4\e\9\8\y\l\u\l\t\8\b\o\b\w\q\w\2\p\d\m\y\4\q\1\n\2\0\
u\n\2\6\r\z\x\u\z\o\2\d\z\t\5\k\i\2\q\6\l\f\v\m\4\e\2\o\9\0\a\l\q\4\j\5\5\y\u\t\w\0\j\6\m\f\v\t\f\g\y\6\4\y\t\k\g\a\i\6\t\w\7\q\h\m\g\x\7\v\4\h\2\b\f\0\o\n\u\c\g\f\d\r\z\d\f\8\f\m\z\v\y\t\k\q\k\p\2\q\o\e\a\p\y\9\x\k\s\o\n\7\m\v\e\e\l\b\7\i\0\n\w\8\u\z\b\o\5\r\a\9\r\j\1\i\l\6\4\r\y\z\i\i\b\6\e\4\u\v\m\4\t\y\g\z\z\f\m\e\v\l\c\h\g\2\q\2\p\k\p\2\6\m\6\z\k\k\i\1\g\t\z\9\w\t\c\2\o\b\q\p\g\u\l\3\1\w\0\0\b\k\x\h\l\w\b\l\8\u\d\u\7\a\1\0\q\f\7\u\5\y\g\6\3\o\o\i\q\h\e\9\r\g\5\j\0\f\q\2\0\h\f\m\s\a\l\b\t\o\g\4\t\9\o\u\f\s\3\0\o\q\4\8\p\7\v\l\e\p\y\8\i\8\7\9\5\z\s\5\g\a\6\s\1\f\e\h\b\5\f\g\0\y\9\i\j\d\w\n\u\s\n\2\c\w\y\4\2\e\4\s\w\f\x\n\2\j\j\3\6\q\3\k\e\y\8\i\1\9\h\l\z\b\k\e\e\s\2\z\7\z\p\w\b\d\c\g\y\5\l\9\0\2\9\m\o\o\r\f\f\7\3\9\l\5\w\2\n\s\0\x\o\n\6\4\7\n\7\e\u\3\f\o\d\c\n\w\p\9\b\j\a\v\8\5\r\t\4\m\i\6\a\n\w\w\v\y\j\m\s\d\y\o\y\y\5\p\w\h\8\0\u\b\p\2\0\w\e\p\1\f\g\9\o\1\z\x\r\e\b\0\4\i\p\o\f\3\g\6\n\5\5\f\c\e\g\s\0\y\2\k\x\9\x\h\j\2\1\u\x\w\l\s\f\8\i\4\e\7\h\e\m\6\2\g\5\z\d\8\r\n\w\v\w\e\p\d\l\v\o\c\9\d\t\y\2\z\u\c\o\4\a\v\f\c\m\4\f\g\5\0\d\s\6\s\g\k\5\2\1\5\6\e\j\o\p\h\0\0\f\2\u\h\f\0\a\1\8\p\o\a\h\e\y\r\x\x\4\4\8\2\1\3\w\t\0\b\r\0\2\s\b\q\1\o\f\u\y\j\5\r\i\r\7\6\l\z\6\y\9\3\6\x\y\y\y\g\m\b\z\d\y\b\6\u\e\a\s\t\q\c\a\m\2\1\9\r\a\a\u\y\5\l\j\b\j\6\1\u\6\b\t\x\f\o\c\0\t\h\y\2\r\i\c\a\f\7\r\u\5\q\q\5\u\0\c\v\w\a\y\5\w\i\0\v\o\6\2\u\f\t\2\p\m\2\k\9\z\o\j\e\h\4\g\8\c\6\1\m\t\8\6\s\7\j\t\3\s\u\5\f\4\9\j\l\u\p\l\w\6\m\7\x\x\z\u\5\h\i\l\h\a\1\e\b\u\n\s\j\2\n\t\v\9\y\4\1\7\6\z\3\c\4\w\i\j\2\7\m\b\u\w\6\x\r\o\3\k\c\v\d\5\g\5\o\m\r\3\v\n\c\b\f\q\u\j\y\t\2\6\n\n\6\p\1\f\e\i\g\n\f\a\j\p\1\1\e\w\7\0\k\1\u\x\8\k\6\d\c\f\g\u\e\b\v\c\f\e\u\p\a\v\u\v\n\h\l\q\6\7\j\m\y\f\1\s\l\4\3\7\a\3\b\y\k\4\c\8\9\5\x\w\p\2\q\4\5\d\f\n\n\g\1\8\j\p\q\f\p\a\v\d\p\f\r\e\l\b\b\g\2\t\w\p\h\0\k\y\s\e\z\k\x\j\8\d\g\b\6\e\m\c\i\h\l\c\7\n\5\0\z\1\b\h\v\h\e\i\3\y\r\o\7\v\b\l\k\t\k\4\5\e\x\v\k\z\o\9\p\k\z\p\p\u\7\8\s\8\t\u\7\t\l\p\l\8\k\e\9\z\g\t\y\x\e\x\o\x\t\p\6\n\1\d\8\4\1\t\n\1\k\y\g\b\a\t\n\s\k\i\k\q\w\5\a\i\c\v\u\c\6\l\k\r\r\k\8\o\c\7\n\y\m\m\a\s\4\1\8\e\s\1\w\u\v\e\8\e\y\u\y\l\8\p\c\s\v\f\b\y\e\d\c\n\2\7\8\g\a\z\f\c\e\j\3\q\h\f\w\l\w\x\j\p\w\i\i\0\k\9\m\3\k\1\t\o\v\p\8\9\2\w\f\j\m\7\9\i\3\q\s\y\z\x\4\1\s\h\n\m\v\z\1\n\f\f\r\y\r\h\2\q\l\n\6\a\d\y\c\l\1\y\x\m\u\x\g\9\3\l\i\n\n\w\z\f\0\g\l\6\g\5\7\w\k\2\d\4\a\1\q\c\6\n\f\w\w\m\7\z\u\o\o\o\l\y\c\2\1\b\4\0\a\9\j\7\1\1\g\m\j\x\t\m\t\n\g\b\4\z\8\j\b\f\n\3\z\s\d\6\d\2\c\o\w\b\y\o\c\t\o\j\r\8\j\7\2\a\v\9\j\p\e\a\b\e\w\s\q\e\9\8\6\n\z\1\p\t\5\r\c\r\a\9\a\f\i\r\1\b\1\3\l\6\d\f\w\9\4\j\t\u\v\k\m\z\j\u\u\i\t\6\r\1\m\x\d\m\0\s\g\i\a\d\1\p\2\c\h\5\1\i\y\j\g\b\u\x\o\i\m\f\c\w\4\a\l\a\x\9\i\n\6\e\w\n\y\m\6\8\z\q\g\f\k\k\1\4\3\k\s\b\i\w\q\1\t\r\3\e\6\q\s\6\1\o\0\n\7\c\e\i\d\y\7\t\9\t\5\l\r\2\c\x\n\j\n\s\u\4\l\0\6\p\v\9\h\a\n\s\2\y\t\z\b\d\e\1\o\b\r\f\i\f\4\o\w\q\2\u\o\1\q\0\v\x\3\n\j\x\6\w\q\b\2\v\i\c\z\0\f\h\r\e\f\c\4\m\f\t\r\w\x\x\r\r\7\f\s\x\e\3\s\f\9\z\p\q\s\c\m\6\a\e\x\c\i\e\o\a\x\v\5\7\m\b\i\e\f\s\p\c\o\g\4\x\m\h\a\t\4\t\m\p\m\t\o\3\k\8\h\7\v\f\m\4\0\z\s\i\3\p\4\3\w\b\q\d\v\j\s\b\x\8\x\6\q\9\h\v\g\i\x\h\r\y\p\h\o\z\k\n\f\a\n\0\3\x\j\5\t\5\4\a\s\i\1\f\1\m\t\m\k\d\8\k\c\6\4\3\5\0\m\h\u\m\i\k\u\j\d\r\1\1\f\i\5\p\u\j\2\9\p\3\1\t\p\v\5\o\o\x\b\y\g\m\j\9\f\k\d\k\c\e\8\u\p\3\3\s\h\r\p\6\n\w\1\i\m\l\q\e\7\w\5\k\v\7\2\n\1\h\a\z\v\b\e\g\r\y\0\a\n\j\5\r\m\r\b\u\f\q\o\u\m\x\5\2\q\s\i\9\5\d\6\n\w\h\3\9\d\2\z\7\h\c\v\f\t\q\z\3\s\7\8\1\q\q\2\x\q\w\o\5\c\e\9\d\t\d\j\i\t\s\6\w\b\f\8\r\t\j\o\z\z\w\m\5\h\z\b\j\7\7\k\1\3\1\9\n\u\t\8\e\z\f\w\v\4\m\j\e\5\m\c\4\b\c\e\8\3\7\w\c\6\a\s\e\x\n\o\w\4\2\8\0\g\a\k\a\b\x\o\r\t\q\k
\e\1\2\f\4\i\v\1\t\q\j\w\z\0\n\f\5\w\8\n\x\k\0\h\5\m\t\j\j\y\v\o\e\g\n\m\t\q\z\m\s\r\t\h\x\t\o\a\l\f\y\8\x\1\o\j\8\t\i\d\b\q\n\u\7\8\v\t\9\j\5\i\w\1\i\9\y\4\g\4\p\q\c\2\a\9\v\u\3\5\0\w\2\q\5\q\j\2\m\2\g\5\u\1\v\c\y\x\g\2\p\u\e\s\q\q\x\s\i\g\e\x\l\0\o\h\7\d\w\6\1\s\3\n\v\u\k\j\b\h\2\v\r\o\n\7\r\b\y\4\w\s\c\3\7\j\c\8\v\n\m\6\u\o\e\u\e\e\e\8\o\g\x\j\q\3\g\y\7\5\i\m\r\v\z\g\s\e\g\x\9\u\r\5\i\o\2\k\w\d\q\5\q\c\f\l\y\p\g\o\7\q\q\9\q\p\g\g\r\9\0\i\k\g\5\d\8\i\6\m\j\l\9\v\p\b\c\m\w\b\2\e\q\g\h\0\1\a\z\2\h\r\7\q\9\s\m\k\4\i\e\r\c\b\c\8\3\4\9\g\g\v\a\0\j\e\g\b\j\f\d\6\q\c\z\g\s\q\z\p\k\g\d\p\a\w\b\r\y\u\n\i\c\g\m\w\u\y\h\d\7\x\z\k\w\m\6\x\0\6\g\k\p\1\4\a\q\6\m\k\0\8\9\n\s\u\s\m\g\j\l\p\o\n\m\6\2\x\r\z\r\r\2\k\f\f\r\h\2\h\l\6\6\c\1\4\a\o\w\o\k\8\7\t\z\k\y\9\w\b\k\j\l\p\j\p\h\4\e\a\8\3\y\q\g\c\j\v\f\9\0\8\g\t\e\z\s\z\1\i\4\4\3\6\m\4\7\2\6\z\o\u\x\s\w\4\s\x\1\4\0\w\k\5\r\8\u\l\m\p\l\9\k\7\n\b\a\g\s\n\m\v\1\u\9\3\n\6\1\1\y\8\0\6\y\t\z\0\b\t\u\u\u\u\7\6\a\2\3\k\9\w\u\s\y\h\n\s\6\w\e\6\y\w\4\k\c\n\k\i\w\8\i\w\2\l\a\p\t\r\i\0\c\t\r\x\4\0\g\m\4\9\0\1\k\s\a\k\f\4\4\c\p\o\r\e\x\a\o\q\9\0\3\x\x\n\m\0\1\9\r\v\o\2\4\t\i\k\2\u\3\o\8\b\h\v\p\y\1\6\t\i\b\2\x\s\y\4\u\5\v\0\c\m\f\6\p\g\s\3\m\4\z\e\u\z\m\n\6\l\s\y\2\s\q\0\7\y\o\f\i\j\n\o\6\2\4\0\0\8\a\i\b\k\x\j\q\i\y\u\l\7\r\c\p\f\2\j\u\5\l\1\e\o\x\0\9\8\r\5\5\t\d\h\s\4\j\n\z\2\j\3\a\0\3\1\b\l\5\p\8\f\z\g\m\l\e\k\i\9\a\p\v\2\n\c\u\b\2\5\o\0\4\k\8\f\o\l\t\k\9\u\t\e\s\2\8\h\9\r\6\2\c\u\a\e\v\h\e\l\9\3\7\q\8\9\p\d\q\y\x\0\j\7\p\4\w\4\9\g\a\9\v\a\4\h\m\h\w\t\9\1\e\s\m\d\f\v\c\o\1\p\p\m\0\n\m\6\o\x\9\6\s\3\y\e\a\1\f\i\k\b\z\o\q\m\9\i\q\m\s\i\u\g\2\l\x\t\4\3\3\b\g\f\6\u\i\e\w\e\8\m\9\7\0\1\b\a\s\9\l\5\l\p\n\5\s\l\9\1\3\i\s\w\w\2\y\q\8\1\i\f\t\n\t\9\c\8\x\y\t\g\w\g\a\5\a\k\y\y\x\9\x\q\9\c\u\0\p\b\t\0\m\1\s\i\f\u\h\8\t\r\b\s\z\o\v\3\4\9\r\a\c\b\5\t\i\b\6\a\x\3\s\h\h\i\j\k\p\b\f\a\y\2\3\o\i\9\n\i\8\6\x\2\1\b\f\1\5\5\h\z\5\4\a\2\2\z\f\a\9\1\j\u\a\i\l\0\j\g\v\6\r\j\n\y\l\3\7\4\s\d\y\9\v\k\n\l\0\l\1\x\m\f\e\f\w\5\9\9\q\1\s\0\f\u\q\l\k\m\c\c\6\q\9\h\g\q\d\k\x\o\7\3\0\2\i\d\3\1\r\8\k\2\p\t\p\t\4\i\a\8\l\l\g\d\g\6\2\r\0\i\z\b\a\8\t\b\p\q\1\n\r\q\a\j\j\6\x\v\k\z\h\8\u\c\1\7\x\z\j\5\3\p\1\o\w\f\p\b\y\q\q\p\c\2\f\0\1\h\t\r\2\k\p\4\s\i\e\t\j\k\2\d\q\m\1\0\x\w\m\s\n\8\h\y\w\n\i\i\s\3\f\j\z\u\z\u\b\0\m\t\r\k\k\k\z\e\6\e\z\w\c\4\0\c\o\y\x\1\x\5\m\r\o\l\z\t\y\u\s\m\9\k\g\v\h\b\3\g\3\n\l\b\d\1\a\i\o\u\w\r\5\l\r\q\w\8\m\l\t\4\h\7\t\4\r\u\a\7\z\1\z\s\1\j\k\2\m\a\2\o\m\b\m\s\d\z\e\z\6\k\2\o\j\v\p\5\t\z\l\r\2\k\r\t\3\x\i\t\k\o\g\c\f\e\a\b\v\k\o\c\r\k\o\l\q\3\d\5\f\k\l\1\5\4\y\3\u\o\n\n\5\k\5\8\d\n\w\e\o\5\8\k\4\7\7\6\2\p\2\f\j\i\t\8\h ]] 00:07:53.482 ************************************ 00:07:53.482 END TEST dd_rw_offset 00:07:53.482 ************************************ 00:07:53.482 00:07:53.482 real 0m1.279s 00:07:53.482 user 0m0.859s 00:07:53.482 sys 0m0.622s 00:07:53.482 09:35:40 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:53.482 09:35:40 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:07:53.482 09:35:41 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@1 -- # cleanup 00:07:53.482 09:35:41 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:07:53.482 09:35:41 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:53.482 09:35:41 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:53.482 09:35:41 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@12 -- # local size=0xffff 00:07:53.482 09:35:41 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@14 -- # local bs=1048576 
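The second invocation above reads the same block back into dd.dump1 and the shell compares it byte-for-byte with what was written; cleanup (clear_nvme) then zeroes the start of the bdev. A condensed sketch of those steps, where conf stands for the same JSON config shown earlier and the 4096-byte read length matches the dump size used in this run:

# Read one block back from offset 1 into dd.dump1, then verify the round trip.
spdk_dd --ib=Nvme0n1 --of=dd.dump1 --skip=1 --count=1 --json "$conf"
read -rn4096 data_check < dd.dump1
[[ "$data" == "$data_check" ]] || echo 'dd_rw_offset: data mismatch' >&2

# Cleanup: overwrite the first 1 MiB of the bdev with zeroes.
spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json "$conf"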
00:07:53.482 09:35:41 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@15 -- # local count=1 00:07:53.482 09:35:41 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:53.482 09:35:41 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # gen_conf 00:07:53.482 09:35:41 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:53.482 09:35:41 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:07:53.482 [2024-11-19 09:35:41.085763] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:07:53.482 [2024-11-19 09:35:41.085875] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60090 ] 00:07:53.482 { 00:07:53.482 "subsystems": [ 00:07:53.482 { 00:07:53.482 "subsystem": "bdev", 00:07:53.482 "config": [ 00:07:53.482 { 00:07:53.482 "params": { 00:07:53.482 "trtype": "pcie", 00:07:53.482 "traddr": "0000:00:10.0", 00:07:53.482 "name": "Nvme0" 00:07:53.482 }, 00:07:53.482 "method": "bdev_nvme_attach_controller" 00:07:53.482 }, 00:07:53.482 { 00:07:53.482 "method": "bdev_wait_for_examine" 00:07:53.482 } 00:07:53.482 ] 00:07:53.482 } 00:07:53.482 ] 00:07:53.482 } 00:07:53.741 [2024-11-19 09:35:41.230945] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.741 [2024-11-19 09:35:41.280150] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.741 [2024-11-19 09:35:41.338693] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:54.000  [2024-11-19T09:35:41.882Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:54.259 00:07:54.259 09:35:41 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:54.259 00:07:54.259 real 0m18.038s 00:07:54.259 user 0m12.846s 00:07:54.259 sys 0m6.914s 00:07:54.259 09:35:41 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:54.259 09:35:41 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:07:54.259 ************************************ 00:07:54.259 END TEST spdk_dd_basic_rw 00:07:54.259 ************************************ 00:07:54.259 09:35:41 spdk_dd -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:07:54.259 09:35:41 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:54.259 09:35:41 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:54.259 09:35:41 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:54.259 ************************************ 00:07:54.259 START TEST spdk_dd_posix 00:07:54.259 ************************************ 00:07:54.259 09:35:41 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:07:54.259 * Looking for test storage... 
00:07:54.259 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:54.259 09:35:41 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:54.259 09:35:41 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1693 -- # lcov --version 00:07:54.259 09:35:41 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:54.518 09:35:41 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:54.518 09:35:41 spdk_dd.spdk_dd_posix -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:54.518 09:35:41 spdk_dd.spdk_dd_posix -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:54.518 09:35:41 spdk_dd.spdk_dd_posix -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:54.518 09:35:41 spdk_dd.spdk_dd_posix -- scripts/common.sh@336 -- # IFS=.-: 00:07:54.518 09:35:41 spdk_dd.spdk_dd_posix -- scripts/common.sh@336 -- # read -ra ver1 00:07:54.518 09:35:41 spdk_dd.spdk_dd_posix -- scripts/common.sh@337 -- # IFS=.-: 00:07:54.518 09:35:41 spdk_dd.spdk_dd_posix -- scripts/common.sh@337 -- # read -ra ver2 00:07:54.518 09:35:41 spdk_dd.spdk_dd_posix -- scripts/common.sh@338 -- # local 'op=<' 00:07:54.518 09:35:41 spdk_dd.spdk_dd_posix -- scripts/common.sh@340 -- # ver1_l=2 00:07:54.518 09:35:41 spdk_dd.spdk_dd_posix -- scripts/common.sh@341 -- # ver2_l=1 00:07:54.518 09:35:41 spdk_dd.spdk_dd_posix -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:54.518 09:35:41 spdk_dd.spdk_dd_posix -- scripts/common.sh@344 -- # case "$op" in 00:07:54.518 09:35:41 spdk_dd.spdk_dd_posix -- scripts/common.sh@345 -- # : 1 00:07:54.518 09:35:41 spdk_dd.spdk_dd_posix -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:54.518 09:35:41 spdk_dd.spdk_dd_posix -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:54.518 09:35:41 spdk_dd.spdk_dd_posix -- scripts/common.sh@365 -- # decimal 1 00:07:54.518 09:35:41 spdk_dd.spdk_dd_posix -- scripts/common.sh@353 -- # local d=1 00:07:54.518 09:35:41 spdk_dd.spdk_dd_posix -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:54.518 09:35:41 spdk_dd.spdk_dd_posix -- scripts/common.sh@355 -- # echo 1 00:07:54.518 09:35:41 spdk_dd.spdk_dd_posix -- scripts/common.sh@365 -- # ver1[v]=1 00:07:54.518 09:35:41 spdk_dd.spdk_dd_posix -- scripts/common.sh@366 -- # decimal 2 00:07:54.518 09:35:41 spdk_dd.spdk_dd_posix -- scripts/common.sh@353 -- # local d=2 00:07:54.518 09:35:41 spdk_dd.spdk_dd_posix -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:54.518 09:35:41 spdk_dd.spdk_dd_posix -- scripts/common.sh@355 -- # echo 2 00:07:54.518 09:35:41 spdk_dd.spdk_dd_posix -- scripts/common.sh@366 -- # ver2[v]=2 00:07:54.518 09:35:41 spdk_dd.spdk_dd_posix -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:54.518 09:35:41 spdk_dd.spdk_dd_posix -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:54.518 09:35:41 spdk_dd.spdk_dd_posix -- scripts/common.sh@368 -- # return 0 00:07:54.518 09:35:41 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:54.518 09:35:41 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:54.518 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:54.518 --rc genhtml_branch_coverage=1 00:07:54.518 --rc genhtml_function_coverage=1 00:07:54.518 --rc genhtml_legend=1 00:07:54.518 --rc geninfo_all_blocks=1 00:07:54.518 --rc geninfo_unexecuted_blocks=1 00:07:54.518 00:07:54.518 ' 00:07:54.518 09:35:41 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:54.518 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:54.518 --rc genhtml_branch_coverage=1 00:07:54.518 --rc genhtml_function_coverage=1 00:07:54.518 --rc genhtml_legend=1 00:07:54.518 --rc geninfo_all_blocks=1 00:07:54.518 --rc geninfo_unexecuted_blocks=1 00:07:54.518 00:07:54.518 ' 00:07:54.518 09:35:41 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:54.518 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:54.518 --rc genhtml_branch_coverage=1 00:07:54.518 --rc genhtml_function_coverage=1 00:07:54.518 --rc genhtml_legend=1 00:07:54.518 --rc geninfo_all_blocks=1 00:07:54.518 --rc geninfo_unexecuted_blocks=1 00:07:54.518 00:07:54.518 ' 00:07:54.518 09:35:41 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:54.518 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:54.518 --rc genhtml_branch_coverage=1 00:07:54.518 --rc genhtml_function_coverage=1 00:07:54.518 --rc genhtml_legend=1 00:07:54.518 --rc geninfo_all_blocks=1 00:07:54.518 --rc geninfo_unexecuted_blocks=1 00:07:54.518 00:07:54.518 ' 00:07:54.518 09:35:41 spdk_dd.spdk_dd_posix -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:54.518 09:35:41 spdk_dd.spdk_dd_posix -- scripts/common.sh@15 -- # shopt -s extglob 00:07:54.518 09:35:41 spdk_dd.spdk_dd_posix -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:54.518 09:35:41 spdk_dd.spdk_dd_posix -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:54.518 09:35:41 spdk_dd.spdk_dd_posix -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:54.518 09:35:41 spdk_dd.spdk_dd_posix -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:54.518 09:35:41 spdk_dd.spdk_dd_posix -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:54.518 09:35:41 spdk_dd.spdk_dd_posix -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:54.518 09:35:41 spdk_dd.spdk_dd_posix -- paths/export.sh@5 -- # export PATH 00:07:54.518 09:35:41 spdk_dd.spdk_dd_posix -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:54.518 09:35:41 spdk_dd.spdk_dd_posix -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:07:54.518 09:35:41 spdk_dd.spdk_dd_posix -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:07:54.518 09:35:41 spdk_dd.spdk_dd_posix -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:07:54.518 09:35:41 spdk_dd.spdk_dd_posix -- dd/posix.sh@125 -- # trap cleanup EXIT 00:07:54.518 09:35:41 spdk_dd.spdk_dd_posix -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:54.518 09:35:41 spdk_dd.spdk_dd_posix -- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:54.518 09:35:41 spdk_dd.spdk_dd_posix -- dd/posix.sh@130 -- # tests 00:07:54.518 09:35:41 spdk_dd.spdk_dd_posix -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', liburing in use' 00:07:54.518 * First test run, liburing in use 00:07:54.518 09:35:41 spdk_dd.spdk_dd_posix -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:07:54.518 09:35:41 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:54.518 09:35:41 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:07:54.518 09:35:41 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:54.518 ************************************ 00:07:54.518 START TEST dd_flag_append 00:07:54.518 ************************************ 00:07:54.518 09:35:41 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1129 -- # append 00:07:54.518 09:35:41 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@16 -- # local dump0 00:07:54.518 09:35:41 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@17 -- # local dump1 00:07:54.518 09:35:41 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # gen_bytes 32 00:07:54.519 09:35:41 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:07:54.519 09:35:41 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:07:54.519 09:35:41 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # dump0=cpfqmnbej1admk5iufkg318fvlxqv0ca 00:07:54.519 09:35:41 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # gen_bytes 32 00:07:54.519 09:35:41 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:07:54.519 09:35:41 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:07:54.519 09:35:41 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # dump1=uprim8n7jkan4l8cumrj4i2x3bb7l717 00:07:54.519 09:35:41 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@22 -- # printf %s cpfqmnbej1admk5iufkg318fvlxqv0ca 00:07:54.519 09:35:41 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@23 -- # printf %s uprim8n7jkan4l8cumrj4i2x3bb7l717 00:07:54.519 09:35:41 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:07:54.519 [2024-11-19 09:35:41.983695] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 
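dd_flag_append seeds dd.dump0 and dd.dump1 with the two 32-byte strings generated above and then copies dump0 onto dump1 with --oflag=append; the check that follows passes only if dump1 ends up as its original contents with dump0 appended. A minimal sketch of that assertion, with dump0/dump1 standing in for the generated strings:

printf %s "$dump0" > dd.dump0
printf %s "$dump1" > dd.dump1
spdk_dd --if=dd.dump0 --of=dd.dump1 --oflag=append
# append must preserve the destination and add the source at the end
[[ "$(<dd.dump1)" == "${dump1}${dump0}" ]]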
00:07:54.519 [2024-11-19 09:35:41.983832] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60162 ] 00:07:54.519 [2024-11-19 09:35:42.135756] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.780 [2024-11-19 09:35:42.209384] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.780 [2024-11-19 09:35:42.268955] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:54.780  [2024-11-19T09:35:42.663Z] Copying: 32/32 [B] (average 31 kBps) 00:07:55.040 00:07:55.040 09:35:42 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@27 -- # [[ uprim8n7jkan4l8cumrj4i2x3bb7l717cpfqmnbej1admk5iufkg318fvlxqv0ca == \u\p\r\i\m\8\n\7\j\k\a\n\4\l\8\c\u\m\r\j\4\i\2\x\3\b\b\7\l\7\1\7\c\p\f\q\m\n\b\e\j\1\a\d\m\k\5\i\u\f\k\g\3\1\8\f\v\l\x\q\v\0\c\a ]] 00:07:55.040 00:07:55.040 real 0m0.587s 00:07:55.040 user 0m0.317s 00:07:55.040 sys 0m0.299s 00:07:55.040 09:35:42 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:55.040 ************************************ 00:07:55.040 END TEST dd_flag_append 00:07:55.040 09:35:42 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:07:55.040 ************************************ 00:07:55.040 09:35:42 spdk_dd.spdk_dd_posix -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:07:55.040 09:35:42 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:55.040 09:35:42 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:55.040 09:35:42 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:55.040 ************************************ 00:07:55.040 START TEST dd_flag_directory 00:07:55.040 ************************************ 00:07:55.040 09:35:42 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1129 -- # directory 00:07:55.040 09:35:42 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:55.040 09:35:42 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # local es=0 00:07:55.040 09:35:42 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:55.040 09:35:42 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:55.040 09:35:42 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:55.040 09:35:42 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:55.040 09:35:42 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:55.040 09:35:42 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:55.040 09:35:42 spdk_dd.spdk_dd_posix.dd_flag_directory -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:55.040 09:35:42 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:55.040 09:35:42 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:55.040 09:35:42 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:55.040 [2024-11-19 09:35:42.614167] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:07:55.040 [2024-11-19 09:35:42.614288] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60191 ] 00:07:55.299 [2024-11-19 09:35:42.763469] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.299 [2024-11-19 09:35:42.824542] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.299 [2024-11-19 09:35:42.879972] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:55.299 [2024-11-19 09:35:42.915241] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:55.299 [2024-11-19 09:35:42.915284] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:55.299 [2024-11-19 09:35:42.915303] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:55.558 [2024-11-19 09:35:43.031373] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:55.558 09:35:43 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # es=236 00:07:55.558 09:35:43 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:55.558 09:35:43 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@664 -- # es=108 00:07:55.558 09:35:43 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@665 -- # case "$es" in 00:07:55.558 09:35:43 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@672 -- # es=1 00:07:55.558 09:35:43 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:55.558 09:35:43 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:55.558 09:35:43 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # local es=0 00:07:55.558 09:35:43 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:55.558 09:35:43 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:55.558 09:35:43 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:55.558 09:35:43 
spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:55.558 09:35:43 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:55.558 09:35:43 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:55.558 09:35:43 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:55.558 09:35:43 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:55.558 09:35:43 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:55.558 09:35:43 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:55.558 [2024-11-19 09:35:43.153927] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:07:55.558 [2024-11-19 09:35:43.154019] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60200 ] 00:07:55.817 [2024-11-19 09:35:43.301241] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.817 [2024-11-19 09:35:43.361484] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.817 [2024-11-19 09:35:43.418412] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:56.076 [2024-11-19 09:35:43.458487] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:56.076 [2024-11-19 09:35:43.458556] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:56.076 [2024-11-19 09:35:43.458592] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:56.076 [2024-11-19 09:35:43.580599] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:56.076 09:35:43 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # es=236 00:07:56.076 09:35:43 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:56.076 09:35:43 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@664 -- # es=108 00:07:56.076 09:35:43 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@665 -- # case "$es" in 00:07:56.076 09:35:43 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@672 -- # es=1 00:07:56.076 09:35:43 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:56.076 00:07:56.076 real 0m1.091s 00:07:56.076 user 0m0.602s 00:07:56.076 sys 0m0.280s 00:07:56.076 09:35:43 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:56.076 ************************************ 00:07:56.076 END TEST dd_flag_directory 00:07:56.076 ************************************ 00:07:56.076 09:35:43 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@10 -- # set +x 00:07:56.076 09:35:43 
spdk_dd.spdk_dd_posix -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:07:56.076 09:35:43 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:56.076 09:35:43 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:56.076 09:35:43 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:56.076 ************************************ 00:07:56.076 START TEST dd_flag_nofollow 00:07:56.076 ************************************ 00:07:56.076 09:35:43 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1129 -- # nofollow 00:07:56.076 09:35:43 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:56.076 09:35:43 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:56.076 09:35:43 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:56.076 09:35:43 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:56.337 09:35:43 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:56.337 09:35:43 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # local es=0 00:07:56.337 09:35:43 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:56.337 09:35:43 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:56.337 09:35:43 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:56.337 09:35:43 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:56.337 09:35:43 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:56.337 09:35:43 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:56.337 09:35:43 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:56.337 09:35:43 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:56.337 09:35:43 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:56.337 09:35:43 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:56.337 [2024-11-19 09:35:43.748774] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 
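The dd_flag_directory test that just finished relies on the NOT helper from common/autotest_common.sh, which inverts the exit status: the test passes precisely because spdk_dd refuses a regular file when the directory flag is set, logging "Not a directory" on both the input and the output side. Roughly:

# Both calls are expected to fail; NOT turns the expected failure into a passing check.
NOT spdk_dd --if=dd.dump0 --iflag=directory --of=dd.dump0   # input side: Not a directory
NOT spdk_dd --if=dd.dump0 --of=dd.dump0 --oflag=directory   # output side: Not a directory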
00:07:56.337 [2024-11-19 09:35:43.748884] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60229 ] 00:07:56.337 [2024-11-19 09:35:43.891286] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.337 [2024-11-19 09:35:43.958246] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.597 [2024-11-19 09:35:44.014603] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:56.597 [2024-11-19 09:35:44.054376] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:07:56.597 [2024-11-19 09:35:44.054462] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:07:56.597 [2024-11-19 09:35:44.054498] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:56.597 [2024-11-19 09:35:44.180160] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:56.857 09:35:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # es=216 00:07:56.857 09:35:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:56.857 09:35:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@664 -- # es=88 00:07:56.857 09:35:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@665 -- # case "$es" in 00:07:56.857 09:35:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@672 -- # es=1 00:07:56.857 09:35:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:56.857 09:35:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:56.857 09:35:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # local es=0 00:07:56.857 09:35:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:56.857 09:35:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:56.857 09:35:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:56.857 09:35:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:56.857 09:35:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:56.857 09:35:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:56.857 09:35:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:56.857 09:35:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:56.857 09:35:44 
spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:56.857 09:35:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:56.857 [2024-11-19 09:35:44.320324] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:07:56.857 [2024-11-19 09:35:44.320488] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60238 ] 00:07:56.857 [2024-11-19 09:35:44.471989] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.117 [2024-11-19 09:35:44.541971] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.117 [2024-11-19 09:35:44.600479] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:57.117 [2024-11-19 09:35:44.638049] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:07:57.117 [2024-11-19 09:35:44.638118] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:07:57.117 [2024-11-19 09:35:44.638154] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:57.472 [2024-11-19 09:35:44.761898] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:57.472 09:35:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # es=216 00:07:57.472 09:35:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:57.472 09:35:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@664 -- # es=88 00:07:57.472 09:35:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@665 -- # case "$es" in 00:07:57.472 09:35:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@672 -- # es=1 00:07:57.472 09:35:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:57.472 09:35:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@46 -- # gen_bytes 512 00:07:57.472 09:35:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/common.sh@98 -- # xtrace_disable 00:07:57.472 09:35:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:07:57.472 09:35:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:57.472 [2024-11-19 09:35:44.894085] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 
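dd_flag_nofollow applies the same pattern to symlinks: after ln -fs creates dd.dump0.link and dd.dump1.link, the copies using --iflag=nofollow / --oflag=nofollow must fail with "Too many levels of symbolic links", while the final copy without the flag follows the link and succeeds. A condensed sketch of the assertions traced above:

ln -fs dd.dump0 dd.dump0.link
ln -fs dd.dump1 dd.dump1.link
NOT spdk_dd --if=dd.dump0.link --iflag=nofollow --of=dd.dump1   # must fail on the symlinked input
NOT spdk_dd --if=dd.dump0 --of=dd.dump1.link --oflag=nofollow   # must fail on the symlinked output
spdk_dd --if=dd.dump0.link --of=dd.dump1                        # without nofollow the link is followed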
00:07:57.472 [2024-11-19 09:35:44.894236] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60246 ] 00:07:57.472 [2024-11-19 09:35:45.046992] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.732 [2024-11-19 09:35:45.111078] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.732 [2024-11-19 09:35:45.170310] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:57.732  [2024-11-19T09:35:45.615Z] Copying: 512/512 [B] (average 500 kBps) 00:07:57.992 00:07:57.992 09:35:45 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@49 -- # [[ 07k6hyczd3q6yzsg9nokyx28ptt312hcv2ba7pbs4yvwcehhv8neaj9fj3yp2dz5hpydfboiu3uthg9rt1uamzl2rp9fvo7v3szt5wyb9hhq5bnh6po2f4q8ngmqr087dyg8oe4jxrgc7dtbwmy4m9snhu2ckvpz9qt7vm25r0ys7pe5io2mn7u7czhlj8bmdxwiq9vvz24v76bgqux84l5294ecqa70a6tkcn3b4dsj8p8nmbbnprw6icwrunt51wib0455voog81y0fubpwkxx931xtgb4wp7t0w91ydjwp8nal34ovdab29jhtpm84v5621ipsl2z0p1vx3gyydl0rqct4u951drvpvumxn6nms959pxy3o1gral0t3cf3soklndagt2gfc0iim16jz9nj8v7ngx8zv8qk69k1l68wbwtkhf625mryzv4kmaaopzvhzi5zd3hh7a8jmud5h6p92409t1h5mew3q6qvukc09bwc5tn404uu44o9q6u == \0\7\k\6\h\y\c\z\d\3\q\6\y\z\s\g\9\n\o\k\y\x\2\8\p\t\t\3\1\2\h\c\v\2\b\a\7\p\b\s\4\y\v\w\c\e\h\h\v\8\n\e\a\j\9\f\j\3\y\p\2\d\z\5\h\p\y\d\f\b\o\i\u\3\u\t\h\g\9\r\t\1\u\a\m\z\l\2\r\p\9\f\v\o\7\v\3\s\z\t\5\w\y\b\9\h\h\q\5\b\n\h\6\p\o\2\f\4\q\8\n\g\m\q\r\0\8\7\d\y\g\8\o\e\4\j\x\r\g\c\7\d\t\b\w\m\y\4\m\9\s\n\h\u\2\c\k\v\p\z\9\q\t\7\v\m\2\5\r\0\y\s\7\p\e\5\i\o\2\m\n\7\u\7\c\z\h\l\j\8\b\m\d\x\w\i\q\9\v\v\z\2\4\v\7\6\b\g\q\u\x\8\4\l\5\2\9\4\e\c\q\a\7\0\a\6\t\k\c\n\3\b\4\d\s\j\8\p\8\n\m\b\b\n\p\r\w\6\i\c\w\r\u\n\t\5\1\w\i\b\0\4\5\5\v\o\o\g\8\1\y\0\f\u\b\p\w\k\x\x\9\3\1\x\t\g\b\4\w\p\7\t\0\w\9\1\y\d\j\w\p\8\n\a\l\3\4\o\v\d\a\b\2\9\j\h\t\p\m\8\4\v\5\6\2\1\i\p\s\l\2\z\0\p\1\v\x\3\g\y\y\d\l\0\r\q\c\t\4\u\9\5\1\d\r\v\p\v\u\m\x\n\6\n\m\s\9\5\9\p\x\y\3\o\1\g\r\a\l\0\t\3\c\f\3\s\o\k\l\n\d\a\g\t\2\g\f\c\0\i\i\m\1\6\j\z\9\n\j\8\v\7\n\g\x\8\z\v\8\q\k\6\9\k\1\l\6\8\w\b\w\t\k\h\f\6\2\5\m\r\y\z\v\4\k\m\a\a\o\p\z\v\h\z\i\5\z\d\3\h\h\7\a\8\j\m\u\d\5\h\6\p\9\2\4\0\9\t\1\h\5\m\e\w\3\q\6\q\v\u\k\c\0\9\b\w\c\5\t\n\4\0\4\u\u\4\4\o\9\q\6\u ]] 00:07:57.992 00:07:57.992 real 0m1.703s 00:07:57.992 user 0m0.948s 00:07:57.992 sys 0m0.566s 00:07:57.992 ************************************ 00:07:57.992 END TEST dd_flag_nofollow 00:07:57.992 ************************************ 00:07:57.992 09:35:45 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:57.992 09:35:45 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:07:57.992 09:35:45 spdk_dd.spdk_dd_posix -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:07:57.992 09:35:45 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:57.992 09:35:45 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:57.992 09:35:45 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:57.992 ************************************ 00:07:57.992 START TEST dd_flag_noatime 00:07:57.992 ************************************ 00:07:57.992 09:35:45 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1129 -- # noatime 00:07:57.992 09:35:45 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@53 -- # local 
atime_if 00:07:57.992 09:35:45 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@54 -- # local atime_of 00:07:57.992 09:35:45 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@58 -- # gen_bytes 512 00:07:57.992 09:35:45 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/common.sh@98 -- # xtrace_disable 00:07:57.992 09:35:45 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:07:57.992 09:35:45 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:57.992 09:35:45 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # atime_if=1732008945 00:07:57.992 09:35:45 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:57.992 09:35:45 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # atime_of=1732008945 00:07:57.992 09:35:45 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@66 -- # sleep 1 00:07:58.930 09:35:46 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:58.930 [2024-11-19 09:35:46.526116] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:07:58.930 [2024-11-19 09:35:46.526201] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60290 ] 00:07:59.189 [2024-11-19 09:35:46.675623] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:59.189 [2024-11-19 09:35:46.739310] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.189 [2024-11-19 09:35:46.799843] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:59.448  [2024-11-19T09:35:47.071Z] Copying: 512/512 [B] (average 500 kBps) 00:07:59.448 00:07:59.448 09:35:47 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:59.448 09:35:47 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # (( atime_if == 1732008945 )) 00:07:59.448 09:35:47 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:59.448 09:35:47 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # (( atime_of == 1732008945 )) 00:07:59.448 09:35:47 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:59.707 [2024-11-19 09:35:47.080484] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 
00:07:59.707 [2024-11-19 09:35:47.080561] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60304 ] 00:07:59.707 [2024-11-19 09:35:47.224999] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:59.707 [2024-11-19 09:35:47.278690] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.966 [2024-11-19 09:35:47.335468] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:59.966  [2024-11-19T09:35:47.589Z] Copying: 512/512 [B] (average 500 kBps) 00:07:59.966 00:07:59.966 09:35:47 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:59.966 09:35:47 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # (( atime_if < 1732008947 )) 00:07:59.966 00:07:59.966 real 0m2.107s 00:07:59.966 user 0m0.599s 00:07:59.966 sys 0m0.559s 00:07:59.966 09:35:47 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:59.966 09:35:47 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:07:59.966 ************************************ 00:07:59.966 END TEST dd_flag_noatime 00:07:59.966 ************************************ 00:08:00.225 09:35:47 spdk_dd.spdk_dd_posix -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:08:00.225 09:35:47 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:00.225 09:35:47 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:00.225 09:35:47 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:00.225 ************************************ 00:08:00.225 START TEST dd_flags_misc 00:08:00.225 ************************************ 00:08:00.225 09:35:47 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1129 -- # io 00:08:00.225 09:35:47 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:08:00.225 09:35:47 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:08:00.225 09:35:47 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:08:00.225 09:35:47 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:08:00.225 09:35:47 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:08:00.225 09:35:47 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:08:00.225 09:35:47 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:08:00.225 09:35:47 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:00.225 09:35:47 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:08:00.225 [2024-11-19 09:35:47.680298] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 
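The dd_flag_noatime run above captures each file's access time with stat --printf=%X, sleeps one second, and checks that a copy opened with --iflag=noatime leaves the source atime untouched, whereas the follow-up copy without the flag advances it. The epoch values (1732008945/1732008947) are specific to this run; the logic reduces to roughly:

atime_before=$(stat --printf=%X dd.dump0)
sleep 1
spdk_dd --if=dd.dump0 --iflag=noatime --of=dd.dump1
(( $(stat --printf=%X dd.dump0) == atime_before ))   # noatime: source atime unchanged

spdk_dd --if=dd.dump0 --of=dd.dump1                  # plain read is expected to update atime
(( atime_before < $(stat --printf=%X dd.dump0) ))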
00:08:00.225 [2024-11-19 09:35:47.680390] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60338 ] 00:08:00.225 [2024-11-19 09:35:47.827620] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:00.485 [2024-11-19 09:35:47.887501] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:00.485 [2024-11-19 09:35:47.946139] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:00.485  [2024-11-19T09:35:48.367Z] Copying: 512/512 [B] (average 500 kBps) 00:08:00.744 00:08:00.744 09:35:48 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ cfgzi5r35znnnc9k023pki32849j6pe0ra06d9hs35ft1ks4fqugc5r8kcjdjvwg4u8f3lhevhrxbrt5hgf0jxead9jfqnxff09p48sbok21i94halmja2at8z7ljiict6o7rkqqwajmcxqg5le4zbjuy0xl99f1g8y8kn5mpr1swvoi7ysndixcbew4eqgsdq5iiuxewsftuvklsjcwezu1xn6tlhc96wjk5fqprv70t0sf5y7a639nlhc28p1tb6tpxt05y55jhil8mhsxyrclf9ytx3uavc1xb33lxew29ogejt8r8zhz0agtjetm6ld6lqvksk5j9gsdqiblaxrqvw9y9o0o0pd229429wx5qmcx3pz9hqmot7s7sr6vnpq0f5gjwwrufhbc3ltvvw7ko0efx6dyn0wy3r83zbxz2ussb4u9a7i6egh8yeu8jlozm2edf7pgzt35vhldzk8idxx0eghhiutfp2ereul97r167qbimhhjluofxt1y == \c\f\g\z\i\5\r\3\5\z\n\n\n\c\9\k\0\2\3\p\k\i\3\2\8\4\9\j\6\p\e\0\r\a\0\6\d\9\h\s\3\5\f\t\1\k\s\4\f\q\u\g\c\5\r\8\k\c\j\d\j\v\w\g\4\u\8\f\3\l\h\e\v\h\r\x\b\r\t\5\h\g\f\0\j\x\e\a\d\9\j\f\q\n\x\f\f\0\9\p\4\8\s\b\o\k\2\1\i\9\4\h\a\l\m\j\a\2\a\t\8\z\7\l\j\i\i\c\t\6\o\7\r\k\q\q\w\a\j\m\c\x\q\g\5\l\e\4\z\b\j\u\y\0\x\l\9\9\f\1\g\8\y\8\k\n\5\m\p\r\1\s\w\v\o\i\7\y\s\n\d\i\x\c\b\e\w\4\e\q\g\s\d\q\5\i\i\u\x\e\w\s\f\t\u\v\k\l\s\j\c\w\e\z\u\1\x\n\6\t\l\h\c\9\6\w\j\k\5\f\q\p\r\v\7\0\t\0\s\f\5\y\7\a\6\3\9\n\l\h\c\2\8\p\1\t\b\6\t\p\x\t\0\5\y\5\5\j\h\i\l\8\m\h\s\x\y\r\c\l\f\9\y\t\x\3\u\a\v\c\1\x\b\3\3\l\x\e\w\2\9\o\g\e\j\t\8\r\8\z\h\z\0\a\g\t\j\e\t\m\6\l\d\6\l\q\v\k\s\k\5\j\9\g\s\d\q\i\b\l\a\x\r\q\v\w\9\y\9\o\0\o\0\p\d\2\2\9\4\2\9\w\x\5\q\m\c\x\3\p\z\9\h\q\m\o\t\7\s\7\s\r\6\v\n\p\q\0\f\5\g\j\w\w\r\u\f\h\b\c\3\l\t\v\v\w\7\k\o\0\e\f\x\6\d\y\n\0\w\y\3\r\8\3\z\b\x\z\2\u\s\s\b\4\u\9\a\7\i\6\e\g\h\8\y\e\u\8\j\l\o\z\m\2\e\d\f\7\p\g\z\t\3\5\v\h\l\d\z\k\8\i\d\x\x\0\e\g\h\h\i\u\t\f\p\2\e\r\e\u\l\9\7\r\1\6\7\q\b\i\m\h\h\j\l\u\o\f\x\t\1\y ]] 00:08:00.744 09:35:48 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:00.744 09:35:48 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:08:00.744 [2024-11-19 09:35:48.230367] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 
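The eight near-identical copy-and-compare runs in dd_flags_misc come from one small nested loop in dd/posix.sh; reconstructed roughly below (gen_bytes hides its body behind xtrace_disable, so what it regenerates is an assumption, and the long [[ ... == ... ]] lines appear to be the readback-versus-expected comparison after each run):

    flags_ro=(direct nonblock)              # flags exercised on the read side
    flags_rw=("${flags_ro[@]}" sync dsync)  # flags exercised on the write side
    for flag_ro in "${flags_ro[@]}"; do
      gen_bytes 512                         # regenerate the 512-byte input (assumed)
      for flag_rw in "${flags_rw[@]}"; do
        spdk_dd --if=dd.dump0 --iflag="$flag_ro" --of=dd.dump1 --oflag="$flag_rw"
        # followed by the long [[ ... == ... ]] check that the copied data still matches
      done
    done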
00:08:00.744 [2024-11-19 09:35:48.230483] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60347 ] 00:08:01.004 [2024-11-19 09:35:48.380080] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:01.004 [2024-11-19 09:35:48.438502] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.004 [2024-11-19 09:35:48.494752] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:01.004  [2024-11-19T09:35:48.886Z] Copying: 512/512 [B] (average 500 kBps) 00:08:01.263 00:08:01.263 09:35:48 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ cfgzi5r35znnnc9k023pki32849j6pe0ra06d9hs35ft1ks4fqugc5r8kcjdjvwg4u8f3lhevhrxbrt5hgf0jxead9jfqnxff09p48sbok21i94halmja2at8z7ljiict6o7rkqqwajmcxqg5le4zbjuy0xl99f1g8y8kn5mpr1swvoi7ysndixcbew4eqgsdq5iiuxewsftuvklsjcwezu1xn6tlhc96wjk5fqprv70t0sf5y7a639nlhc28p1tb6tpxt05y55jhil8mhsxyrclf9ytx3uavc1xb33lxew29ogejt8r8zhz0agtjetm6ld6lqvksk5j9gsdqiblaxrqvw9y9o0o0pd229429wx5qmcx3pz9hqmot7s7sr6vnpq0f5gjwwrufhbc3ltvvw7ko0efx6dyn0wy3r83zbxz2ussb4u9a7i6egh8yeu8jlozm2edf7pgzt35vhldzk8idxx0eghhiutfp2ereul97r167qbimhhjluofxt1y == \c\f\g\z\i\5\r\3\5\z\n\n\n\c\9\k\0\2\3\p\k\i\3\2\8\4\9\j\6\p\e\0\r\a\0\6\d\9\h\s\3\5\f\t\1\k\s\4\f\q\u\g\c\5\r\8\k\c\j\d\j\v\w\g\4\u\8\f\3\l\h\e\v\h\r\x\b\r\t\5\h\g\f\0\j\x\e\a\d\9\j\f\q\n\x\f\f\0\9\p\4\8\s\b\o\k\2\1\i\9\4\h\a\l\m\j\a\2\a\t\8\z\7\l\j\i\i\c\t\6\o\7\r\k\q\q\w\a\j\m\c\x\q\g\5\l\e\4\z\b\j\u\y\0\x\l\9\9\f\1\g\8\y\8\k\n\5\m\p\r\1\s\w\v\o\i\7\y\s\n\d\i\x\c\b\e\w\4\e\q\g\s\d\q\5\i\i\u\x\e\w\s\f\t\u\v\k\l\s\j\c\w\e\z\u\1\x\n\6\t\l\h\c\9\6\w\j\k\5\f\q\p\r\v\7\0\t\0\s\f\5\y\7\a\6\3\9\n\l\h\c\2\8\p\1\t\b\6\t\p\x\t\0\5\y\5\5\j\h\i\l\8\m\h\s\x\y\r\c\l\f\9\y\t\x\3\u\a\v\c\1\x\b\3\3\l\x\e\w\2\9\o\g\e\j\t\8\r\8\z\h\z\0\a\g\t\j\e\t\m\6\l\d\6\l\q\v\k\s\k\5\j\9\g\s\d\q\i\b\l\a\x\r\q\v\w\9\y\9\o\0\o\0\p\d\2\2\9\4\2\9\w\x\5\q\m\c\x\3\p\z\9\h\q\m\o\t\7\s\7\s\r\6\v\n\p\q\0\f\5\g\j\w\w\r\u\f\h\b\c\3\l\t\v\v\w\7\k\o\0\e\f\x\6\d\y\n\0\w\y\3\r\8\3\z\b\x\z\2\u\s\s\b\4\u\9\a\7\i\6\e\g\h\8\y\e\u\8\j\l\o\z\m\2\e\d\f\7\p\g\z\t\3\5\v\h\l\d\z\k\8\i\d\x\x\0\e\g\h\h\i\u\t\f\p\2\e\r\e\u\l\9\7\r\1\6\7\q\b\i\m\h\h\j\l\u\o\f\x\t\1\y ]] 00:08:01.263 09:35:48 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:01.263 09:35:48 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:08:01.263 [2024-11-19 09:35:48.759063] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 
00:08:01.263 [2024-11-19 09:35:48.759167] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60357 ] 00:08:01.522 [2024-11-19 09:35:48.908841] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:01.522 [2024-11-19 09:35:48.976017] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.522 [2024-11-19 09:35:49.031850] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:01.522  [2024-11-19T09:35:49.402Z] Copying: 512/512 [B] (average 100 kBps) 00:08:01.779 00:08:01.779 09:35:49 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ cfgzi5r35znnnc9k023pki32849j6pe0ra06d9hs35ft1ks4fqugc5r8kcjdjvwg4u8f3lhevhrxbrt5hgf0jxead9jfqnxff09p48sbok21i94halmja2at8z7ljiict6o7rkqqwajmcxqg5le4zbjuy0xl99f1g8y8kn5mpr1swvoi7ysndixcbew4eqgsdq5iiuxewsftuvklsjcwezu1xn6tlhc96wjk5fqprv70t0sf5y7a639nlhc28p1tb6tpxt05y55jhil8mhsxyrclf9ytx3uavc1xb33lxew29ogejt8r8zhz0agtjetm6ld6lqvksk5j9gsdqiblaxrqvw9y9o0o0pd229429wx5qmcx3pz9hqmot7s7sr6vnpq0f5gjwwrufhbc3ltvvw7ko0efx6dyn0wy3r83zbxz2ussb4u9a7i6egh8yeu8jlozm2edf7pgzt35vhldzk8idxx0eghhiutfp2ereul97r167qbimhhjluofxt1y == \c\f\g\z\i\5\r\3\5\z\n\n\n\c\9\k\0\2\3\p\k\i\3\2\8\4\9\j\6\p\e\0\r\a\0\6\d\9\h\s\3\5\f\t\1\k\s\4\f\q\u\g\c\5\r\8\k\c\j\d\j\v\w\g\4\u\8\f\3\l\h\e\v\h\r\x\b\r\t\5\h\g\f\0\j\x\e\a\d\9\j\f\q\n\x\f\f\0\9\p\4\8\s\b\o\k\2\1\i\9\4\h\a\l\m\j\a\2\a\t\8\z\7\l\j\i\i\c\t\6\o\7\r\k\q\q\w\a\j\m\c\x\q\g\5\l\e\4\z\b\j\u\y\0\x\l\9\9\f\1\g\8\y\8\k\n\5\m\p\r\1\s\w\v\o\i\7\y\s\n\d\i\x\c\b\e\w\4\e\q\g\s\d\q\5\i\i\u\x\e\w\s\f\t\u\v\k\l\s\j\c\w\e\z\u\1\x\n\6\t\l\h\c\9\6\w\j\k\5\f\q\p\r\v\7\0\t\0\s\f\5\y\7\a\6\3\9\n\l\h\c\2\8\p\1\t\b\6\t\p\x\t\0\5\y\5\5\j\h\i\l\8\m\h\s\x\y\r\c\l\f\9\y\t\x\3\u\a\v\c\1\x\b\3\3\l\x\e\w\2\9\o\g\e\j\t\8\r\8\z\h\z\0\a\g\t\j\e\t\m\6\l\d\6\l\q\v\k\s\k\5\j\9\g\s\d\q\i\b\l\a\x\r\q\v\w\9\y\9\o\0\o\0\p\d\2\2\9\4\2\9\w\x\5\q\m\c\x\3\p\z\9\h\q\m\o\t\7\s\7\s\r\6\v\n\p\q\0\f\5\g\j\w\w\r\u\f\h\b\c\3\l\t\v\v\w\7\k\o\0\e\f\x\6\d\y\n\0\w\y\3\r\8\3\z\b\x\z\2\u\s\s\b\4\u\9\a\7\i\6\e\g\h\8\y\e\u\8\j\l\o\z\m\2\e\d\f\7\p\g\z\t\3\5\v\h\l\d\z\k\8\i\d\x\x\0\e\g\h\h\i\u\t\f\p\2\e\r\e\u\l\9\7\r\1\6\7\q\b\i\m\h\h\j\l\u\o\f\x\t\1\y ]] 00:08:01.779 09:35:49 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:01.779 09:35:49 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:08:01.779 [2024-11-19 09:35:49.314598] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 
00:08:01.779 [2024-11-19 09:35:49.314756] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60366 ] 00:08:02.037 [2024-11-19 09:35:49.462080] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:02.037 [2024-11-19 09:35:49.514602] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:02.037 [2024-11-19 09:35:49.569027] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:02.037  [2024-11-19T09:35:49.919Z] Copying: 512/512 [B] (average 500 kBps) 00:08:02.296 00:08:02.296 09:35:49 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ cfgzi5r35znnnc9k023pki32849j6pe0ra06d9hs35ft1ks4fqugc5r8kcjdjvwg4u8f3lhevhrxbrt5hgf0jxead9jfqnxff09p48sbok21i94halmja2at8z7ljiict6o7rkqqwajmcxqg5le4zbjuy0xl99f1g8y8kn5mpr1swvoi7ysndixcbew4eqgsdq5iiuxewsftuvklsjcwezu1xn6tlhc96wjk5fqprv70t0sf5y7a639nlhc28p1tb6tpxt05y55jhil8mhsxyrclf9ytx3uavc1xb33lxew29ogejt8r8zhz0agtjetm6ld6lqvksk5j9gsdqiblaxrqvw9y9o0o0pd229429wx5qmcx3pz9hqmot7s7sr6vnpq0f5gjwwrufhbc3ltvvw7ko0efx6dyn0wy3r83zbxz2ussb4u9a7i6egh8yeu8jlozm2edf7pgzt35vhldzk8idxx0eghhiutfp2ereul97r167qbimhhjluofxt1y == \c\f\g\z\i\5\r\3\5\z\n\n\n\c\9\k\0\2\3\p\k\i\3\2\8\4\9\j\6\p\e\0\r\a\0\6\d\9\h\s\3\5\f\t\1\k\s\4\f\q\u\g\c\5\r\8\k\c\j\d\j\v\w\g\4\u\8\f\3\l\h\e\v\h\r\x\b\r\t\5\h\g\f\0\j\x\e\a\d\9\j\f\q\n\x\f\f\0\9\p\4\8\s\b\o\k\2\1\i\9\4\h\a\l\m\j\a\2\a\t\8\z\7\l\j\i\i\c\t\6\o\7\r\k\q\q\w\a\j\m\c\x\q\g\5\l\e\4\z\b\j\u\y\0\x\l\9\9\f\1\g\8\y\8\k\n\5\m\p\r\1\s\w\v\o\i\7\y\s\n\d\i\x\c\b\e\w\4\e\q\g\s\d\q\5\i\i\u\x\e\w\s\f\t\u\v\k\l\s\j\c\w\e\z\u\1\x\n\6\t\l\h\c\9\6\w\j\k\5\f\q\p\r\v\7\0\t\0\s\f\5\y\7\a\6\3\9\n\l\h\c\2\8\p\1\t\b\6\t\p\x\t\0\5\y\5\5\j\h\i\l\8\m\h\s\x\y\r\c\l\f\9\y\t\x\3\u\a\v\c\1\x\b\3\3\l\x\e\w\2\9\o\g\e\j\t\8\r\8\z\h\z\0\a\g\t\j\e\t\m\6\l\d\6\l\q\v\k\s\k\5\j\9\g\s\d\q\i\b\l\a\x\r\q\v\w\9\y\9\o\0\o\0\p\d\2\2\9\4\2\9\w\x\5\q\m\c\x\3\p\z\9\h\q\m\o\t\7\s\7\s\r\6\v\n\p\q\0\f\5\g\j\w\w\r\u\f\h\b\c\3\l\t\v\v\w\7\k\o\0\e\f\x\6\d\y\n\0\w\y\3\r\8\3\z\b\x\z\2\u\s\s\b\4\u\9\a\7\i\6\e\g\h\8\y\e\u\8\j\l\o\z\m\2\e\d\f\7\p\g\z\t\3\5\v\h\l\d\z\k\8\i\d\x\x\0\e\g\h\h\i\u\t\f\p\2\e\r\e\u\l\9\7\r\1\6\7\q\b\i\m\h\h\j\l\u\o\f\x\t\1\y ]] 00:08:02.296 09:35:49 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:08:02.296 09:35:49 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:08:02.296 09:35:49 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:08:02.296 09:35:49 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:08:02.296 09:35:49 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:02.296 09:35:49 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:08:02.296 [2024-11-19 09:35:49.845321] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 
00:08:02.296 [2024-11-19 09:35:49.845413] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60376 ] 00:08:02.555 [2024-11-19 09:35:49.991436] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:02.555 [2024-11-19 09:35:50.047527] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:02.555 [2024-11-19 09:35:50.104812] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:02.555  [2024-11-19T09:35:50.437Z] Copying: 512/512 [B] (average 500 kBps) 00:08:02.814 00:08:02.814 09:35:50 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ fdv987io59hz0dovteqc2tx1mu2k4zjfuip18mm6wsoe8y8vk6qd3v18zsi1tljukb0nffr4e7ux0a3005fni1nfnv7jyhgfjkeck7ccclzq69m8dujcbo0jamd10mxp5x4azs28sde19if3r8epjw8b80ojqna7qoyp2abwgrfya9888zpd2u3e9b9go3nw539sqqf4t4i5k1scfv6fxxftz5y4sm3yfkd9g3es3pyptym3xoyxgrxf4tf4ksnebrid35lk5xy9twgy05yb83ng16g9y5l8xl9nhy196escd0g6ce9bq4rn2j6gono5qyfrk9394aihpjtspf59wzwgea5eqbp9rkyahgdik48k2xydi841hmg7fbmipfsgyosiknfhmifuigell763in4q1hukd7xmwzm7n9pge2dli02ny4nizzvc1mh2f97ioezpf8lqfs2uezp625ozcm8p48d3dmvadqaeb6o452unus216dkh0ofc2gz62yn2 == \f\d\v\9\8\7\i\o\5\9\h\z\0\d\o\v\t\e\q\c\2\t\x\1\m\u\2\k\4\z\j\f\u\i\p\1\8\m\m\6\w\s\o\e\8\y\8\v\k\6\q\d\3\v\1\8\z\s\i\1\t\l\j\u\k\b\0\n\f\f\r\4\e\7\u\x\0\a\3\0\0\5\f\n\i\1\n\f\n\v\7\j\y\h\g\f\j\k\e\c\k\7\c\c\c\l\z\q\6\9\m\8\d\u\j\c\b\o\0\j\a\m\d\1\0\m\x\p\5\x\4\a\z\s\2\8\s\d\e\1\9\i\f\3\r\8\e\p\j\w\8\b\8\0\o\j\q\n\a\7\q\o\y\p\2\a\b\w\g\r\f\y\a\9\8\8\8\z\p\d\2\u\3\e\9\b\9\g\o\3\n\w\5\3\9\s\q\q\f\4\t\4\i\5\k\1\s\c\f\v\6\f\x\x\f\t\z\5\y\4\s\m\3\y\f\k\d\9\g\3\e\s\3\p\y\p\t\y\m\3\x\o\y\x\g\r\x\f\4\t\f\4\k\s\n\e\b\r\i\d\3\5\l\k\5\x\y\9\t\w\g\y\0\5\y\b\8\3\n\g\1\6\g\9\y\5\l\8\x\l\9\n\h\y\1\9\6\e\s\c\d\0\g\6\c\e\9\b\q\4\r\n\2\j\6\g\o\n\o\5\q\y\f\r\k\9\3\9\4\a\i\h\p\j\t\s\p\f\5\9\w\z\w\g\e\a\5\e\q\b\p\9\r\k\y\a\h\g\d\i\k\4\8\k\2\x\y\d\i\8\4\1\h\m\g\7\f\b\m\i\p\f\s\g\y\o\s\i\k\n\f\h\m\i\f\u\i\g\e\l\l\7\6\3\i\n\4\q\1\h\u\k\d\7\x\m\w\z\m\7\n\9\p\g\e\2\d\l\i\0\2\n\y\4\n\i\z\z\v\c\1\m\h\2\f\9\7\i\o\e\z\p\f\8\l\q\f\s\2\u\e\z\p\6\2\5\o\z\c\m\8\p\4\8\d\3\d\m\v\a\d\q\a\e\b\6\o\4\5\2\u\n\u\s\2\1\6\d\k\h\0\o\f\c\2\g\z\6\2\y\n\2 ]] 00:08:02.814 09:35:50 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:02.814 09:35:50 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:08:02.814 [2024-11-19 09:35:50.377300] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 
00:08:02.814 [2024-11-19 09:35:50.377393] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60385 ] 00:08:03.073 [2024-11-19 09:35:50.522823] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:03.073 [2024-11-19 09:35:50.577098] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.073 [2024-11-19 09:35:50.633322] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:03.073  [2024-11-19T09:35:50.956Z] Copying: 512/512 [B] (average 500 kBps) 00:08:03.333 00:08:03.333 09:35:50 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ fdv987io59hz0dovteqc2tx1mu2k4zjfuip18mm6wsoe8y8vk6qd3v18zsi1tljukb0nffr4e7ux0a3005fni1nfnv7jyhgfjkeck7ccclzq69m8dujcbo0jamd10mxp5x4azs28sde19if3r8epjw8b80ojqna7qoyp2abwgrfya9888zpd2u3e9b9go3nw539sqqf4t4i5k1scfv6fxxftz5y4sm3yfkd9g3es3pyptym3xoyxgrxf4tf4ksnebrid35lk5xy9twgy05yb83ng16g9y5l8xl9nhy196escd0g6ce9bq4rn2j6gono5qyfrk9394aihpjtspf59wzwgea5eqbp9rkyahgdik48k2xydi841hmg7fbmipfsgyosiknfhmifuigell763in4q1hukd7xmwzm7n9pge2dli02ny4nizzvc1mh2f97ioezpf8lqfs2uezp625ozcm8p48d3dmvadqaeb6o452unus216dkh0ofc2gz62yn2 == \f\d\v\9\8\7\i\o\5\9\h\z\0\d\o\v\t\e\q\c\2\t\x\1\m\u\2\k\4\z\j\f\u\i\p\1\8\m\m\6\w\s\o\e\8\y\8\v\k\6\q\d\3\v\1\8\z\s\i\1\t\l\j\u\k\b\0\n\f\f\r\4\e\7\u\x\0\a\3\0\0\5\f\n\i\1\n\f\n\v\7\j\y\h\g\f\j\k\e\c\k\7\c\c\c\l\z\q\6\9\m\8\d\u\j\c\b\o\0\j\a\m\d\1\0\m\x\p\5\x\4\a\z\s\2\8\s\d\e\1\9\i\f\3\r\8\e\p\j\w\8\b\8\0\o\j\q\n\a\7\q\o\y\p\2\a\b\w\g\r\f\y\a\9\8\8\8\z\p\d\2\u\3\e\9\b\9\g\o\3\n\w\5\3\9\s\q\q\f\4\t\4\i\5\k\1\s\c\f\v\6\f\x\x\f\t\z\5\y\4\s\m\3\y\f\k\d\9\g\3\e\s\3\p\y\p\t\y\m\3\x\o\y\x\g\r\x\f\4\t\f\4\k\s\n\e\b\r\i\d\3\5\l\k\5\x\y\9\t\w\g\y\0\5\y\b\8\3\n\g\1\6\g\9\y\5\l\8\x\l\9\n\h\y\1\9\6\e\s\c\d\0\g\6\c\e\9\b\q\4\r\n\2\j\6\g\o\n\o\5\q\y\f\r\k\9\3\9\4\a\i\h\p\j\t\s\p\f\5\9\w\z\w\g\e\a\5\e\q\b\p\9\r\k\y\a\h\g\d\i\k\4\8\k\2\x\y\d\i\8\4\1\h\m\g\7\f\b\m\i\p\f\s\g\y\o\s\i\k\n\f\h\m\i\f\u\i\g\e\l\l\7\6\3\i\n\4\q\1\h\u\k\d\7\x\m\w\z\m\7\n\9\p\g\e\2\d\l\i\0\2\n\y\4\n\i\z\z\v\c\1\m\h\2\f\9\7\i\o\e\z\p\f\8\l\q\f\s\2\u\e\z\p\6\2\5\o\z\c\m\8\p\4\8\d\3\d\m\v\a\d\q\a\e\b\6\o\4\5\2\u\n\u\s\2\1\6\d\k\h\0\o\f\c\2\g\z\6\2\y\n\2 ]] 00:08:03.333 09:35:50 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:03.333 09:35:50 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:08:03.333 [2024-11-19 09:35:50.906013] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 
00:08:03.333 [2024-11-19 09:35:50.906330] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60395 ] 00:08:03.592 [2024-11-19 09:35:51.055725] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:03.592 [2024-11-19 09:35:51.107297] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.592 [2024-11-19 09:35:51.162581] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:03.592  [2024-11-19T09:35:51.474Z] Copying: 512/512 [B] (average 250 kBps) 00:08:03.851 00:08:03.851 09:35:51 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ fdv987io59hz0dovteqc2tx1mu2k4zjfuip18mm6wsoe8y8vk6qd3v18zsi1tljukb0nffr4e7ux0a3005fni1nfnv7jyhgfjkeck7ccclzq69m8dujcbo0jamd10mxp5x4azs28sde19if3r8epjw8b80ojqna7qoyp2abwgrfya9888zpd2u3e9b9go3nw539sqqf4t4i5k1scfv6fxxftz5y4sm3yfkd9g3es3pyptym3xoyxgrxf4tf4ksnebrid35lk5xy9twgy05yb83ng16g9y5l8xl9nhy196escd0g6ce9bq4rn2j6gono5qyfrk9394aihpjtspf59wzwgea5eqbp9rkyahgdik48k2xydi841hmg7fbmipfsgyosiknfhmifuigell763in4q1hukd7xmwzm7n9pge2dli02ny4nizzvc1mh2f97ioezpf8lqfs2uezp625ozcm8p48d3dmvadqaeb6o452unus216dkh0ofc2gz62yn2 == \f\d\v\9\8\7\i\o\5\9\h\z\0\d\o\v\t\e\q\c\2\t\x\1\m\u\2\k\4\z\j\f\u\i\p\1\8\m\m\6\w\s\o\e\8\y\8\v\k\6\q\d\3\v\1\8\z\s\i\1\t\l\j\u\k\b\0\n\f\f\r\4\e\7\u\x\0\a\3\0\0\5\f\n\i\1\n\f\n\v\7\j\y\h\g\f\j\k\e\c\k\7\c\c\c\l\z\q\6\9\m\8\d\u\j\c\b\o\0\j\a\m\d\1\0\m\x\p\5\x\4\a\z\s\2\8\s\d\e\1\9\i\f\3\r\8\e\p\j\w\8\b\8\0\o\j\q\n\a\7\q\o\y\p\2\a\b\w\g\r\f\y\a\9\8\8\8\z\p\d\2\u\3\e\9\b\9\g\o\3\n\w\5\3\9\s\q\q\f\4\t\4\i\5\k\1\s\c\f\v\6\f\x\x\f\t\z\5\y\4\s\m\3\y\f\k\d\9\g\3\e\s\3\p\y\p\t\y\m\3\x\o\y\x\g\r\x\f\4\t\f\4\k\s\n\e\b\r\i\d\3\5\l\k\5\x\y\9\t\w\g\y\0\5\y\b\8\3\n\g\1\6\g\9\y\5\l\8\x\l\9\n\h\y\1\9\6\e\s\c\d\0\g\6\c\e\9\b\q\4\r\n\2\j\6\g\o\n\o\5\q\y\f\r\k\9\3\9\4\a\i\h\p\j\t\s\p\f\5\9\w\z\w\g\e\a\5\e\q\b\p\9\r\k\y\a\h\g\d\i\k\4\8\k\2\x\y\d\i\8\4\1\h\m\g\7\f\b\m\i\p\f\s\g\y\o\s\i\k\n\f\h\m\i\f\u\i\g\e\l\l\7\6\3\i\n\4\q\1\h\u\k\d\7\x\m\w\z\m\7\n\9\p\g\e\2\d\l\i\0\2\n\y\4\n\i\z\z\v\c\1\m\h\2\f\9\7\i\o\e\z\p\f\8\l\q\f\s\2\u\e\z\p\6\2\5\o\z\c\m\8\p\4\8\d\3\d\m\v\a\d\q\a\e\b\6\o\4\5\2\u\n\u\s\2\1\6\d\k\h\0\o\f\c\2\g\z\6\2\y\n\2 ]] 00:08:03.851 09:35:51 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:03.851 09:35:51 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:08:03.851 [2024-11-19 09:35:51.433008] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 
00:08:03.851 [2024-11-19 09:35:51.433108] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60404 ] 00:08:04.110 [2024-11-19 09:35:51.574324] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:04.110 [2024-11-19 09:35:51.633835] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.110 [2024-11-19 09:35:51.688054] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:04.110  [2024-11-19T09:35:51.992Z] Copying: 512/512 [B] (average 500 kBps) 00:08:04.369 00:08:04.369 09:35:51 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ fdv987io59hz0dovteqc2tx1mu2k4zjfuip18mm6wsoe8y8vk6qd3v18zsi1tljukb0nffr4e7ux0a3005fni1nfnv7jyhgfjkeck7ccclzq69m8dujcbo0jamd10mxp5x4azs28sde19if3r8epjw8b80ojqna7qoyp2abwgrfya9888zpd2u3e9b9go3nw539sqqf4t4i5k1scfv6fxxftz5y4sm3yfkd9g3es3pyptym3xoyxgrxf4tf4ksnebrid35lk5xy9twgy05yb83ng16g9y5l8xl9nhy196escd0g6ce9bq4rn2j6gono5qyfrk9394aihpjtspf59wzwgea5eqbp9rkyahgdik48k2xydi841hmg7fbmipfsgyosiknfhmifuigell763in4q1hukd7xmwzm7n9pge2dli02ny4nizzvc1mh2f97ioezpf8lqfs2uezp625ozcm8p48d3dmvadqaeb6o452unus216dkh0ofc2gz62yn2 == \f\d\v\9\8\7\i\o\5\9\h\z\0\d\o\v\t\e\q\c\2\t\x\1\m\u\2\k\4\z\j\f\u\i\p\1\8\m\m\6\w\s\o\e\8\y\8\v\k\6\q\d\3\v\1\8\z\s\i\1\t\l\j\u\k\b\0\n\f\f\r\4\e\7\u\x\0\a\3\0\0\5\f\n\i\1\n\f\n\v\7\j\y\h\g\f\j\k\e\c\k\7\c\c\c\l\z\q\6\9\m\8\d\u\j\c\b\o\0\j\a\m\d\1\0\m\x\p\5\x\4\a\z\s\2\8\s\d\e\1\9\i\f\3\r\8\e\p\j\w\8\b\8\0\o\j\q\n\a\7\q\o\y\p\2\a\b\w\g\r\f\y\a\9\8\8\8\z\p\d\2\u\3\e\9\b\9\g\o\3\n\w\5\3\9\s\q\q\f\4\t\4\i\5\k\1\s\c\f\v\6\f\x\x\f\t\z\5\y\4\s\m\3\y\f\k\d\9\g\3\e\s\3\p\y\p\t\y\m\3\x\o\y\x\g\r\x\f\4\t\f\4\k\s\n\e\b\r\i\d\3\5\l\k\5\x\y\9\t\w\g\y\0\5\y\b\8\3\n\g\1\6\g\9\y\5\l\8\x\l\9\n\h\y\1\9\6\e\s\c\d\0\g\6\c\e\9\b\q\4\r\n\2\j\6\g\o\n\o\5\q\y\f\r\k\9\3\9\4\a\i\h\p\j\t\s\p\f\5\9\w\z\w\g\e\a\5\e\q\b\p\9\r\k\y\a\h\g\d\i\k\4\8\k\2\x\y\d\i\8\4\1\h\m\g\7\f\b\m\i\p\f\s\g\y\o\s\i\k\n\f\h\m\i\f\u\i\g\e\l\l\7\6\3\i\n\4\q\1\h\u\k\d\7\x\m\w\z\m\7\n\9\p\g\e\2\d\l\i\0\2\n\y\4\n\i\z\z\v\c\1\m\h\2\f\9\7\i\o\e\z\p\f\8\l\q\f\s\2\u\e\z\p\6\2\5\o\z\c\m\8\p\4\8\d\3\d\m\v\a\d\q\a\e\b\6\o\4\5\2\u\n\u\s\2\1\6\d\k\h\0\o\f\c\2\g\z\6\2\y\n\2 ]] 00:08:04.369 00:08:04.369 real 0m4.294s 00:08:04.369 user 0m2.308s 00:08:04.369 sys 0m2.192s 00:08:04.369 09:35:51 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:04.369 09:35:51 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:08:04.369 ************************************ 00:08:04.369 END TEST dd_flags_misc 00:08:04.369 ************************************ 00:08:04.369 09:35:51 spdk_dd.spdk_dd_posix -- dd/posix.sh@131 -- # tests_forced_aio 00:08:04.369 09:35:51 spdk_dd.spdk_dd_posix -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', disabling liburing, forcing AIO' 00:08:04.369 * Second test run, disabling liburing, forcing AIO 00:08:04.369 09:35:51 spdk_dd.spdk_dd_posix -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:08:04.369 09:35:51 spdk_dd.spdk_dd_posix -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:08:04.369 09:35:51 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:04.369 09:35:51 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:04.369 09:35:51 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@10 -- # set +x 00:08:04.369 ************************************ 00:08:04.369 START TEST dd_flag_append_forced_aio 00:08:04.369 ************************************ 00:08:04.369 09:35:51 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1129 -- # append 00:08:04.369 09:35:51 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@16 -- # local dump0 00:08:04.369 09:35:51 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@17 -- # local dump1 00:08:04.369 09:35:51 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # gen_bytes 32 00:08:04.369 09:35:51 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:08:04.369 09:35:51 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:04.369 09:35:51 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # dump0=g83p8l3rb6nb62xt81twtai5o7tcllwl 00:08:04.369 09:35:51 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # gen_bytes 32 00:08:04.369 09:35:51 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:08:04.369 09:35:51 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:04.369 09:35:51 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # dump1=1ikwzd9pdjz0r3e0ufwxj19qhz2ecms7 00:08:04.369 09:35:51 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@22 -- # printf %s g83p8l3rb6nb62xt81twtai5o7tcllwl 00:08:04.369 09:35:51 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@23 -- # printf %s 1ikwzd9pdjz0r3e0ufwxj19qhz2ecms7 00:08:04.369 09:35:51 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:08:04.629 [2024-11-19 09:35:52.020550] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 
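From the "* Second test run, disabling liburing, forcing AIO" marker onward, DD_APP gains --aio, so every spdk_dd run below carries that flag. The append case just started reduces to roughly the following (the printf redirect targets and the read-back on the left of the final [[ ]] are not visible in the xtrace, so they are assumptions):

    DD_APP+=("--aio")                 # force the AIO code path instead of liburing
    dump0=$(gen_bytes 32)             # e.g. g83p8l3r... in the log above
    dump1=$(gen_bytes 32)             # e.g. 1ikwzd9p... in the log above
    printf %s "$dump0" > dd.dump0     # assumed redirect targets
    printf %s "$dump1" > dd.dump1
    spdk_dd --aio --if=dd.dump0 --of=dd.dump1 --oflag=append
    # pass condition: dd.dump1 keeps its original 32 bytes and gains dump0's bytes after them
    [[ $(< dd.dump1) == "${dump1}${dump0}" ]]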
00:08:04.629 [2024-11-19 09:35:52.020629] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60433 ] 00:08:04.629 [2024-11-19 09:35:52.169121] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:04.629 [2024-11-19 09:35:52.223829] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.888 [2024-11-19 09:35:52.282771] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:04.888  [2024-11-19T09:35:52.770Z] Copying: 32/32 [B] (average 31 kBps) 00:08:05.147 00:08:05.147 ************************************ 00:08:05.147 END TEST dd_flag_append_forced_aio 00:08:05.147 ************************************ 00:08:05.147 09:35:52 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@27 -- # [[ 1ikwzd9pdjz0r3e0ufwxj19qhz2ecms7g83p8l3rb6nb62xt81twtai5o7tcllwl == \1\i\k\w\z\d\9\p\d\j\z\0\r\3\e\0\u\f\w\x\j\1\9\q\h\z\2\e\c\m\s\7\g\8\3\p\8\l\3\r\b\6\n\b\6\2\x\t\8\1\t\w\t\a\i\5\o\7\t\c\l\l\w\l ]] 00:08:05.147 00:08:05.147 real 0m0.569s 00:08:05.147 user 0m0.309s 00:08:05.147 sys 0m0.140s 00:08:05.147 09:35:52 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:05.147 09:35:52 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:05.147 09:35:52 spdk_dd.spdk_dd_posix -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:08:05.147 09:35:52 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:05.147 09:35:52 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:05.147 09:35:52 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:05.147 ************************************ 00:08:05.147 START TEST dd_flag_directory_forced_aio 00:08:05.147 ************************************ 00:08:05.147 09:35:52 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1129 -- # directory 00:08:05.147 09:35:52 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:05.147 09:35:52 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:08:05.147 09:35:52 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:05.147 09:35:52 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:05.147 09:35:52 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:05.147 09:35:52 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:05.147 09:35:52 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:05.147 09:35:52 
spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:05.147 09:35:52 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:05.147 09:35:52 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:05.147 09:35:52 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:05.147 09:35:52 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:05.147 [2024-11-19 09:35:52.633690] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:08:05.147 [2024-11-19 09:35:52.633794] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60459 ] 00:08:05.406 [2024-11-19 09:35:52.783585] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:05.406 [2024-11-19 09:35:52.841522] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:05.406 [2024-11-19 09:35:52.900123] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:05.406 [2024-11-19 09:35:52.936374] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:05.406 [2024-11-19 09:35:52.936641] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:05.406 [2024-11-19 09:35:52.936669] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:05.666 [2024-11-19 09:35:53.054316] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:05.666 09:35:53 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # es=236 00:08:05.666 09:35:53 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:05.666 09:35:53 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@664 -- # es=108 00:08:05.666 09:35:53 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:08:05.666 09:35:53 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:08:05.666 09:35:53 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:05.666 09:35:53 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:08:05.666 09:35:53 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:08:05.666 09:35:53 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 
--of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:08:05.666 09:35:53 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:05.666 09:35:53 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:05.666 09:35:53 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:05.666 09:35:53 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:05.666 09:35:53 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:05.666 09:35:53 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:05.666 09:35:53 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:05.666 09:35:53 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:05.666 09:35:53 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:08:05.666 [2024-11-19 09:35:53.168982] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:08:05.666 [2024-11-19 09:35:53.169333] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60471 ] 00:08:05.925 [2024-11-19 09:35:53.308033] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:05.925 [2024-11-19 09:35:53.356973] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:05.925 [2024-11-19 09:35:53.412617] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:05.925 [2024-11-19 09:35:53.448711] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:05.925 [2024-11-19 09:35:53.448974] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:05.925 [2024-11-19 09:35:53.448999] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:06.189 [2024-11-19 09:35:53.568891] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:06.189 09:35:53 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # es=236 00:08:06.189 09:35:53 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:06.189 09:35:53 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@664 -- # es=108 00:08:06.189 09:35:53 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:08:06.189 09:35:53 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:08:06.190 09:35:53 
spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:06.190 00:08:06.190 real 0m1.057s 00:08:06.190 user 0m0.575s 00:08:06.190 sys 0m0.270s 00:08:06.190 09:35:53 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:06.190 ************************************ 00:08:06.190 END TEST dd_flag_directory_forced_aio 00:08:06.190 ************************************ 00:08:06.190 09:35:53 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:06.190 09:35:53 spdk_dd.spdk_dd_posix -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:08:06.190 09:35:53 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:06.190 09:35:53 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:06.190 09:35:53 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:06.190 ************************************ 00:08:06.190 START TEST dd_flag_nofollow_forced_aio 00:08:06.190 ************************************ 00:08:06.190 09:35:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1129 -- # nofollow 00:08:06.190 09:35:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:08:06.190 09:35:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:08:06.190 09:35:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:08:06.190 09:35:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:08:06.190 09:35:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:06.190 09:35:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:08:06.190 09:35:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:06.190 09:35:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:06.190 09:35:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:06.190 09:35:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:06.190 09:35:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:06.190 09:35:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:06.190 09:35:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:06.190 09:35:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:06.190 09:35:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:06.190 09:35:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:06.190 [2024-11-19 09:35:53.749260] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:08:06.190 [2024-11-19 09:35:53.749578] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60505 ] 00:08:06.449 [2024-11-19 09:35:53.890438] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:06.449 [2024-11-19 09:35:53.950789] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.449 [2024-11-19 09:35:54.009929] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:06.449 [2024-11-19 09:35:54.047924] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:08:06.449 [2024-11-19 09:35:54.047984] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:08:06.449 [2024-11-19 09:35:54.048005] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:06.708 [2024-11-19 09:35:54.168111] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:06.708 09:35:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # es=216 00:08:06.708 09:35:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:06.708 09:35:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@664 -- # es=88 00:08:06.708 09:35:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:08:06.708 09:35:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:08:06.708 09:35:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:06.708 09:35:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:06.708 09:35:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:08:06.708 09:35:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:06.708 09:35:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # local 
arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:06.708 09:35:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:06.708 09:35:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:06.708 09:35:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:06.708 09:35:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:06.708 09:35:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:06.708 09:35:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:06.708 09:35:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:06.708 09:35:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:06.708 [2024-11-19 09:35:54.282166] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:08:06.708 [2024-11-19 09:35:54.282452] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60509 ] 00:08:06.967 [2024-11-19 09:35:54.422788] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:06.967 [2024-11-19 09:35:54.488626] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.967 [2024-11-19 09:35:54.544811] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:06.967 [2024-11-19 09:35:54.581878] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:08:06.967 [2024-11-19 09:35:54.581935] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:08:06.967 [2024-11-19 09:35:54.581957] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:07.226 [2024-11-19 09:35:54.705092] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:07.226 09:35:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # es=216 00:08:07.226 09:35:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:07.226 09:35:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@664 -- # es=88 00:08:07.226 09:35:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:08:07.226 09:35:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:08:07.226 09:35:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:07.226 09:35:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@46 
-- # gen_bytes 512 00:08:07.226 09:35:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:08:07.226 09:35:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:07.226 09:35:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:07.226 [2024-11-19 09:35:54.827162] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:08:07.226 [2024-11-19 09:35:54.827311] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60522 ] 00:08:07.485 [2024-11-19 09:35:54.970157] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:07.485 [2024-11-19 09:35:55.034594] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.485 [2024-11-19 09:35:55.092502] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:07.744  [2024-11-19T09:35:55.367Z] Copying: 512/512 [B] (average 500 kBps) 00:08:07.744 00:08:07.744 09:35:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@49 -- # [[ 7t6jihu2ng3os2ixy36vsp7f1wp3ozfln9eyftuq5z88posuhoalvy5uz5tx8clx5ilf7znkgvka5r8q2fazmumqbgw1txov3stunhe287fmljanyn8jukq8bw5wzrsicnkolrkyle3b7mwoy9ieuyw7a0lw7k25l6du5bsjqnjirw5u2g5pfxh753a5meq47mmu84qei4bt7sa76lh9fus43u5sq2qxwa66zppx5bf9si6afvjlqwbd7jqq5k9szkwm202k5fkc3718z9n6lcsyfvp15ipc6zqw7q67id09nvs3k2fj0mcewbncukeysujtkdp91cuwvxyodffcf2jsp2owttwlbitpg3c8u3z1ucaiz9owljsze40wnct3ajip01th9vd3tjmjsbgqsw5pv583vctrd3kt8yqjbf7aai7p6t0vvynu1h2cmuxluyqo5wwm0swv28ttpis8q16uiaegv20prz66uk3q9btff1axsc3g8mpsliggwpea == \7\t\6\j\i\h\u\2\n\g\3\o\s\2\i\x\y\3\6\v\s\p\7\f\1\w\p\3\o\z\f\l\n\9\e\y\f\t\u\q\5\z\8\8\p\o\s\u\h\o\a\l\v\y\5\u\z\5\t\x\8\c\l\x\5\i\l\f\7\z\n\k\g\v\k\a\5\r\8\q\2\f\a\z\m\u\m\q\b\g\w\1\t\x\o\v\3\s\t\u\n\h\e\2\8\7\f\m\l\j\a\n\y\n\8\j\u\k\q\8\b\w\5\w\z\r\s\i\c\n\k\o\l\r\k\y\l\e\3\b\7\m\w\o\y\9\i\e\u\y\w\7\a\0\l\w\7\k\2\5\l\6\d\u\5\b\s\j\q\n\j\i\r\w\5\u\2\g\5\p\f\x\h\7\5\3\a\5\m\e\q\4\7\m\m\u\8\4\q\e\i\4\b\t\7\s\a\7\6\l\h\9\f\u\s\4\3\u\5\s\q\2\q\x\w\a\6\6\z\p\p\x\5\b\f\9\s\i\6\a\f\v\j\l\q\w\b\d\7\j\q\q\5\k\9\s\z\k\w\m\2\0\2\k\5\f\k\c\3\7\1\8\z\9\n\6\l\c\s\y\f\v\p\1\5\i\p\c\6\z\q\w\7\q\6\7\i\d\0\9\n\v\s\3\k\2\f\j\0\m\c\e\w\b\n\c\u\k\e\y\s\u\j\t\k\d\p\9\1\c\u\w\v\x\y\o\d\f\f\c\f\2\j\s\p\2\o\w\t\t\w\l\b\i\t\p\g\3\c\8\u\3\z\1\u\c\a\i\z\9\o\w\l\j\s\z\e\4\0\w\n\c\t\3\a\j\i\p\0\1\t\h\9\v\d\3\t\j\m\j\s\b\g\q\s\w\5\p\v\5\8\3\v\c\t\r\d\3\k\t\8\y\q\j\b\f\7\a\a\i\7\p\6\t\0\v\v\y\n\u\1\h\2\c\m\u\x\l\u\y\q\o\5\w\w\m\0\s\w\v\2\8\t\t\p\i\s\8\q\1\6\u\i\a\e\g\v\2\0\p\r\z\6\6\u\k\3\q\9\b\t\f\f\1\a\x\s\c\3\g\8\m\p\s\l\i\g\g\w\p\e\a ]] 00:08:07.744 00:08:07.744 real 0m1.662s 00:08:07.744 user 0m0.891s 00:08:07.744 sys 0m0.439s 00:08:07.744 09:35:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:07.744 09:35:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:07.744 ************************************ 00:08:07.744 END TEST dd_flag_nofollow_forced_aio 00:08:07.744 ************************************ 00:08:08.003 09:35:55 spdk_dd.spdk_dd_posix -- dd/posix.sh@117 
-- # run_test dd_flag_noatime_forced_aio noatime 00:08:08.003 09:35:55 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:08.003 09:35:55 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:08.003 09:35:55 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:08.003 ************************************ 00:08:08.003 START TEST dd_flag_noatime_forced_aio 00:08:08.003 ************************************ 00:08:08.003 09:35:55 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1129 -- # noatime 00:08:08.003 09:35:55 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@53 -- # local atime_if 00:08:08.003 09:35:55 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@54 -- # local atime_of 00:08:08.003 09:35:55 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@58 -- # gen_bytes 512 00:08:08.003 09:35:55 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:08:08.003 09:35:55 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:08.003 09:35:55 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:08.003 09:35:55 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # atime_if=1732008955 00:08:08.004 09:35:55 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:08.004 09:35:55 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # atime_of=1732008955 00:08:08.004 09:35:55 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@66 -- # sleep 1 00:08:08.938 09:35:56 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:08.938 [2024-11-19 09:35:56.487816] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 
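The two NOT-wrapped tests that finished just above (dd_flag_directory_forced_aio and dd_flag_nofollow_forced_aio) exercise the failure paths; condensed, with paths shortened and NOT being the autotest_common.sh helper that succeeds only if the wrapped command fails:

    # --iflag/--oflag=directory on a regular file must fail ("Not a directory")
    NOT spdk_dd --aio --if=dd.dump0 --iflag=directory --of=dd.dump0
    NOT spdk_dd --aio --if=dd.dump0 --of=dd.dump0 --oflag=directory
    # with nofollow, opening a symlink must fail ("Too many levels of symbolic links")
    ln -fs dd.dump0 dd.dump0.link
    ln -fs dd.dump1 dd.dump1.link
    NOT spdk_dd --aio --if=dd.dump0.link --iflag=nofollow --of=dd.dump1
    NOT spdk_dd --aio --if=dd.dump0 --of=dd.dump1.link --oflag=nofollow
    # without nofollow the link is followed and the copy succeeds
    spdk_dd --aio --if=dd.dump0.link --of=dd.dump1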
00:08:08.938 [2024-11-19 09:35:56.488096] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60557 ] 00:08:09.197 [2024-11-19 09:35:56.636146] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.197 [2024-11-19 09:35:56.693647] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.197 [2024-11-19 09:35:56.750344] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:09.197  [2024-11-19T09:35:57.079Z] Copying: 512/512 [B] (average 500 kBps) 00:08:09.456 00:08:09.456 09:35:56 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:09.456 09:35:56 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # (( atime_if == 1732008955 )) 00:08:09.456 09:35:56 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:09.456 09:35:57 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # (( atime_of == 1732008955 )) 00:08:09.456 09:35:57 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:09.456 [2024-11-19 09:35:57.057052] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:08:09.456 [2024-11-19 09:35:57.057159] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60574 ] 00:08:09.715 [2024-11-19 09:35:57.206562] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.715 [2024-11-19 09:35:57.262699] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.715 [2024-11-19 09:35:57.324332] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:09.975  [2024-11-19T09:35:57.598Z] Copying: 512/512 [B] (average 500 kBps) 00:08:09.975 00:08:09.975 09:35:57 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:09.975 09:35:57 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # (( atime_if < 1732008957 )) 00:08:09.975 00:08:09.975 real 0m2.189s 00:08:09.975 user 0m0.636s 00:08:09.975 sys 0m0.309s 00:08:09.975 09:35:57 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:09.975 09:35:57 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:09.975 ************************************ 00:08:09.975 END TEST dd_flag_noatime_forced_aio 00:08:09.975 ************************************ 00:08:10.234 09:35:57 spdk_dd.spdk_dd_posix -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:08:10.234 09:35:57 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:10.234 09:35:57 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:10.234 09:35:57 spdk_dd.spdk_dd_posix -- 
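dd_flag_noatime_forced_aio repeats the earlier noatime flow with --aio prepended via DD_APP; its pass conditions are the same two arithmetic checks, shown here with this run's concrete timestamps:

    (( $(stat --printf=%X dd.dump0) == atime_if ))   # after the --iflag=noatime copy: 1732008955 == 1732008955
    (( atime_if < $(stat --printf=%X dd.dump0) ))    # after the plain copy a second later: 1732008955 < 1732008957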
common/autotest_common.sh@10 -- # set +x 00:08:10.234 ************************************ 00:08:10.234 START TEST dd_flags_misc_forced_aio 00:08:10.234 ************************************ 00:08:10.234 09:35:57 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1129 -- # io 00:08:10.234 09:35:57 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:08:10.234 09:35:57 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:08:10.234 09:35:57 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:08:10.234 09:35:57 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:08:10.234 09:35:57 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:08:10.234 09:35:57 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:08:10.234 09:35:57 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:10.234 09:35:57 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:10.234 09:35:57 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:08:10.234 [2024-11-19 09:35:57.707581] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:08:10.234 [2024-11-19 09:35:57.707673] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60600 ] 00:08:10.493 [2024-11-19 09:35:57.861956] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:10.494 [2024-11-19 09:35:57.926992] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.494 [2024-11-19 09:35:57.990541] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:10.494  [2024-11-19T09:35:58.375Z] Copying: 512/512 [B] (average 500 kBps) 00:08:10.752 00:08:10.752 09:35:58 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ iyt8cuu08adq4oqnj9mfa53hxh898tgxoojvwozca07kgn8fei3g9im1nnkup52qkx4gntyfcaax815nsxi13s1i47zugjpsvdovh3bc1u7zklzegjknpxpue7pc48c47hdyf2rfznfbsbuf3pm97c3ew7jchuuz78d9fvzbl23whduf6aewzmzsww69h9n52fkl3v2vvwf2kbn87wilbq75a5wtsv33sb5lc7ll8ixe77ze2bi877z53sl4eoytltjt9v3ofr9u3b3d44rv8zqtlf9f0xrbtria2uy4c5xv0aqzfsf0vidj03dso73tl30u4ulif1gff84wilnwre8t9939k1lzpwiib8wwdx3b9pur9vn4aynj1fyu30pjfxbgyyexuyf9qe1xl1pptljl0ovqgkj7117vka6ajsv225lz50wl7wx3cdwku1b4r0tppu6gckn40evvkemuai50fiup04hd2yjjm6g0soycx40luy19nq232gmld0km == 
\i\y\t\8\c\u\u\0\8\a\d\q\4\o\q\n\j\9\m\f\a\5\3\h\x\h\8\9\8\t\g\x\o\o\j\v\w\o\z\c\a\0\7\k\g\n\8\f\e\i\3\g\9\i\m\1\n\n\k\u\p\5\2\q\k\x\4\g\n\t\y\f\c\a\a\x\8\1\5\n\s\x\i\1\3\s\1\i\4\7\z\u\g\j\p\s\v\d\o\v\h\3\b\c\1\u\7\z\k\l\z\e\g\j\k\n\p\x\p\u\e\7\p\c\4\8\c\4\7\h\d\y\f\2\r\f\z\n\f\b\s\b\u\f\3\p\m\9\7\c\3\e\w\7\j\c\h\u\u\z\7\8\d\9\f\v\z\b\l\2\3\w\h\d\u\f\6\a\e\w\z\m\z\s\w\w\6\9\h\9\n\5\2\f\k\l\3\v\2\v\v\w\f\2\k\b\n\8\7\w\i\l\b\q\7\5\a\5\w\t\s\v\3\3\s\b\5\l\c\7\l\l\8\i\x\e\7\7\z\e\2\b\i\8\7\7\z\5\3\s\l\4\e\o\y\t\l\t\j\t\9\v\3\o\f\r\9\u\3\b\3\d\4\4\r\v\8\z\q\t\l\f\9\f\0\x\r\b\t\r\i\a\2\u\y\4\c\5\x\v\0\a\q\z\f\s\f\0\v\i\d\j\0\3\d\s\o\7\3\t\l\3\0\u\4\u\l\i\f\1\g\f\f\8\4\w\i\l\n\w\r\e\8\t\9\9\3\9\k\1\l\z\p\w\i\i\b\8\w\w\d\x\3\b\9\p\u\r\9\v\n\4\a\y\n\j\1\f\y\u\3\0\p\j\f\x\b\g\y\y\e\x\u\y\f\9\q\e\1\x\l\1\p\p\t\l\j\l\0\o\v\q\g\k\j\7\1\1\7\v\k\a\6\a\j\s\v\2\2\5\l\z\5\0\w\l\7\w\x\3\c\d\w\k\u\1\b\4\r\0\t\p\p\u\6\g\c\k\n\4\0\e\v\v\k\e\m\u\a\i\5\0\f\i\u\p\0\4\h\d\2\y\j\j\m\6\g\0\s\o\y\c\x\4\0\l\u\y\1\9\n\q\2\3\2\g\m\l\d\0\k\m ]] 00:08:10.752 09:35:58 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:10.752 09:35:58 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:08:10.752 [2024-11-19 09:35:58.306152] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:08:10.752 [2024-11-19 09:35:58.306268] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60608 ] 00:08:11.012 [2024-11-19 09:35:58.455420] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:11.012 [2024-11-19 09:35:58.510184] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.012 [2024-11-19 09:35:58.565165] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:11.012  [2024-11-19T09:35:58.894Z] Copying: 512/512 [B] (average 500 kBps) 00:08:11.271 00:08:11.271 09:35:58 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ iyt8cuu08adq4oqnj9mfa53hxh898tgxoojvwozca07kgn8fei3g9im1nnkup52qkx4gntyfcaax815nsxi13s1i47zugjpsvdovh3bc1u7zklzegjknpxpue7pc48c47hdyf2rfznfbsbuf3pm97c3ew7jchuuz78d9fvzbl23whduf6aewzmzsww69h9n52fkl3v2vvwf2kbn87wilbq75a5wtsv33sb5lc7ll8ixe77ze2bi877z53sl4eoytltjt9v3ofr9u3b3d44rv8zqtlf9f0xrbtria2uy4c5xv0aqzfsf0vidj03dso73tl30u4ulif1gff84wilnwre8t9939k1lzpwiib8wwdx3b9pur9vn4aynj1fyu30pjfxbgyyexuyf9qe1xl1pptljl0ovqgkj7117vka6ajsv225lz50wl7wx3cdwku1b4r0tppu6gckn40evvkemuai50fiup04hd2yjjm6g0soycx40luy19nq232gmld0km == 
\i\y\t\8\c\u\u\0\8\a\d\q\4\o\q\n\j\9\m\f\a\5\3\h\x\h\8\9\8\t\g\x\o\o\j\v\w\o\z\c\a\0\7\k\g\n\8\f\e\i\3\g\9\i\m\1\n\n\k\u\p\5\2\q\k\x\4\g\n\t\y\f\c\a\a\x\8\1\5\n\s\x\i\1\3\s\1\i\4\7\z\u\g\j\p\s\v\d\o\v\h\3\b\c\1\u\7\z\k\l\z\e\g\j\k\n\p\x\p\u\e\7\p\c\4\8\c\4\7\h\d\y\f\2\r\f\z\n\f\b\s\b\u\f\3\p\m\9\7\c\3\e\w\7\j\c\h\u\u\z\7\8\d\9\f\v\z\b\l\2\3\w\h\d\u\f\6\a\e\w\z\m\z\s\w\w\6\9\h\9\n\5\2\f\k\l\3\v\2\v\v\w\f\2\k\b\n\8\7\w\i\l\b\q\7\5\a\5\w\t\s\v\3\3\s\b\5\l\c\7\l\l\8\i\x\e\7\7\z\e\2\b\i\8\7\7\z\5\3\s\l\4\e\o\y\t\l\t\j\t\9\v\3\o\f\r\9\u\3\b\3\d\4\4\r\v\8\z\q\t\l\f\9\f\0\x\r\b\t\r\i\a\2\u\y\4\c\5\x\v\0\a\q\z\f\s\f\0\v\i\d\j\0\3\d\s\o\7\3\t\l\3\0\u\4\u\l\i\f\1\g\f\f\8\4\w\i\l\n\w\r\e\8\t\9\9\3\9\k\1\l\z\p\w\i\i\b\8\w\w\d\x\3\b\9\p\u\r\9\v\n\4\a\y\n\j\1\f\y\u\3\0\p\j\f\x\b\g\y\y\e\x\u\y\f\9\q\e\1\x\l\1\p\p\t\l\j\l\0\o\v\q\g\k\j\7\1\1\7\v\k\a\6\a\j\s\v\2\2\5\l\z\5\0\w\l\7\w\x\3\c\d\w\k\u\1\b\4\r\0\t\p\p\u\6\g\c\k\n\4\0\e\v\v\k\e\m\u\a\i\5\0\f\i\u\p\0\4\h\d\2\y\j\j\m\6\g\0\s\o\y\c\x\4\0\l\u\y\1\9\n\q\2\3\2\g\m\l\d\0\k\m ]] 00:08:11.271 09:35:58 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:11.271 09:35:58 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:08:11.271 [2024-11-19 09:35:58.879043] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:08:11.271 [2024-11-19 09:35:58.879149] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60621 ] 00:08:11.530 [2024-11-19 09:35:59.033083] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:11.530 [2024-11-19 09:35:59.096797] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.825 [2024-11-19 09:35:59.156982] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:11.825  [2024-11-19T09:35:59.448Z] Copying: 512/512 [B] (average 125 kBps) 00:08:11.825 00:08:11.825 09:35:59 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ iyt8cuu08adq4oqnj9mfa53hxh898tgxoojvwozca07kgn8fei3g9im1nnkup52qkx4gntyfcaax815nsxi13s1i47zugjpsvdovh3bc1u7zklzegjknpxpue7pc48c47hdyf2rfznfbsbuf3pm97c3ew7jchuuz78d9fvzbl23whduf6aewzmzsww69h9n52fkl3v2vvwf2kbn87wilbq75a5wtsv33sb5lc7ll8ixe77ze2bi877z53sl4eoytltjt9v3ofr9u3b3d44rv8zqtlf9f0xrbtria2uy4c5xv0aqzfsf0vidj03dso73tl30u4ulif1gff84wilnwre8t9939k1lzpwiib8wwdx3b9pur9vn4aynj1fyu30pjfxbgyyexuyf9qe1xl1pptljl0ovqgkj7117vka6ajsv225lz50wl7wx3cdwku1b4r0tppu6gckn40evvkemuai50fiup04hd2yjjm6g0soycx40luy19nq232gmld0km == 
\i\y\t\8\c\u\u\0\8\a\d\q\4\o\q\n\j\9\m\f\a\5\3\h\x\h\8\9\8\t\g\x\o\o\j\v\w\o\z\c\a\0\7\k\g\n\8\f\e\i\3\g\9\i\m\1\n\n\k\u\p\5\2\q\k\x\4\g\n\t\y\f\c\a\a\x\8\1\5\n\s\x\i\1\3\s\1\i\4\7\z\u\g\j\p\s\v\d\o\v\h\3\b\c\1\u\7\z\k\l\z\e\g\j\k\n\p\x\p\u\e\7\p\c\4\8\c\4\7\h\d\y\f\2\r\f\z\n\f\b\s\b\u\f\3\p\m\9\7\c\3\e\w\7\j\c\h\u\u\z\7\8\d\9\f\v\z\b\l\2\3\w\h\d\u\f\6\a\e\w\z\m\z\s\w\w\6\9\h\9\n\5\2\f\k\l\3\v\2\v\v\w\f\2\k\b\n\8\7\w\i\l\b\q\7\5\a\5\w\t\s\v\3\3\s\b\5\l\c\7\l\l\8\i\x\e\7\7\z\e\2\b\i\8\7\7\z\5\3\s\l\4\e\o\y\t\l\t\j\t\9\v\3\o\f\r\9\u\3\b\3\d\4\4\r\v\8\z\q\t\l\f\9\f\0\x\r\b\t\r\i\a\2\u\y\4\c\5\x\v\0\a\q\z\f\s\f\0\v\i\d\j\0\3\d\s\o\7\3\t\l\3\0\u\4\u\l\i\f\1\g\f\f\8\4\w\i\l\n\w\r\e\8\t\9\9\3\9\k\1\l\z\p\w\i\i\b\8\w\w\d\x\3\b\9\p\u\r\9\v\n\4\a\y\n\j\1\f\y\u\3\0\p\j\f\x\b\g\y\y\e\x\u\y\f\9\q\e\1\x\l\1\p\p\t\l\j\l\0\o\v\q\g\k\j\7\1\1\7\v\k\a\6\a\j\s\v\2\2\5\l\z\5\0\w\l\7\w\x\3\c\d\w\k\u\1\b\4\r\0\t\p\p\u\6\g\c\k\n\4\0\e\v\v\k\e\m\u\a\i\5\0\f\i\u\p\0\4\h\d\2\y\j\j\m\6\g\0\s\o\y\c\x\4\0\l\u\y\1\9\n\q\2\3\2\g\m\l\d\0\k\m ]] 00:08:11.825 09:35:59 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:11.825 09:35:59 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:08:12.082 [2024-11-19 09:35:59.459946] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:08:12.082 [2024-11-19 09:35:59.460048] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60623 ] 00:08:12.082 [2024-11-19 09:35:59.607339] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:12.083 [2024-11-19 09:35:59.662496] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:12.342 [2024-11-19 09:35:59.717773] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:12.342  [2024-11-19T09:35:59.965Z] Copying: 512/512 [B] (average 500 kBps) 00:08:12.342 00:08:12.600 09:35:59 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ iyt8cuu08adq4oqnj9mfa53hxh898tgxoojvwozca07kgn8fei3g9im1nnkup52qkx4gntyfcaax815nsxi13s1i47zugjpsvdovh3bc1u7zklzegjknpxpue7pc48c47hdyf2rfznfbsbuf3pm97c3ew7jchuuz78d9fvzbl23whduf6aewzmzsww69h9n52fkl3v2vvwf2kbn87wilbq75a5wtsv33sb5lc7ll8ixe77ze2bi877z53sl4eoytltjt9v3ofr9u3b3d44rv8zqtlf9f0xrbtria2uy4c5xv0aqzfsf0vidj03dso73tl30u4ulif1gff84wilnwre8t9939k1lzpwiib8wwdx3b9pur9vn4aynj1fyu30pjfxbgyyexuyf9qe1xl1pptljl0ovqgkj7117vka6ajsv225lz50wl7wx3cdwku1b4r0tppu6gckn40evvkemuai50fiup04hd2yjjm6g0soycx40luy19nq232gmld0km == 
\i\y\t\8\c\u\u\0\8\a\d\q\4\o\q\n\j\9\m\f\a\5\3\h\x\h\8\9\8\t\g\x\o\o\j\v\w\o\z\c\a\0\7\k\g\n\8\f\e\i\3\g\9\i\m\1\n\n\k\u\p\5\2\q\k\x\4\g\n\t\y\f\c\a\a\x\8\1\5\n\s\x\i\1\3\s\1\i\4\7\z\u\g\j\p\s\v\d\o\v\h\3\b\c\1\u\7\z\k\l\z\e\g\j\k\n\p\x\p\u\e\7\p\c\4\8\c\4\7\h\d\y\f\2\r\f\z\n\f\b\s\b\u\f\3\p\m\9\7\c\3\e\w\7\j\c\h\u\u\z\7\8\d\9\f\v\z\b\l\2\3\w\h\d\u\f\6\a\e\w\z\m\z\s\w\w\6\9\h\9\n\5\2\f\k\l\3\v\2\v\v\w\f\2\k\b\n\8\7\w\i\l\b\q\7\5\a\5\w\t\s\v\3\3\s\b\5\l\c\7\l\l\8\i\x\e\7\7\z\e\2\b\i\8\7\7\z\5\3\s\l\4\e\o\y\t\l\t\j\t\9\v\3\o\f\r\9\u\3\b\3\d\4\4\r\v\8\z\q\t\l\f\9\f\0\x\r\b\t\r\i\a\2\u\y\4\c\5\x\v\0\a\q\z\f\s\f\0\v\i\d\j\0\3\d\s\o\7\3\t\l\3\0\u\4\u\l\i\f\1\g\f\f\8\4\w\i\l\n\w\r\e\8\t\9\9\3\9\k\1\l\z\p\w\i\i\b\8\w\w\d\x\3\b\9\p\u\r\9\v\n\4\a\y\n\j\1\f\y\u\3\0\p\j\f\x\b\g\y\y\e\x\u\y\f\9\q\e\1\x\l\1\p\p\t\l\j\l\0\o\v\q\g\k\j\7\1\1\7\v\k\a\6\a\j\s\v\2\2\5\l\z\5\0\w\l\7\w\x\3\c\d\w\k\u\1\b\4\r\0\t\p\p\u\6\g\c\k\n\4\0\e\v\v\k\e\m\u\a\i\5\0\f\i\u\p\0\4\h\d\2\y\j\j\m\6\g\0\s\o\y\c\x\4\0\l\u\y\1\9\n\q\2\3\2\g\m\l\d\0\k\m ]] 00:08:12.600 09:35:59 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:08:12.600 09:35:59 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:08:12.600 09:35:59 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:08:12.600 09:35:59 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:12.600 09:35:59 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:12.600 09:35:59 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:08:12.600 [2024-11-19 09:36:00.039080] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 
00:08:12.600 [2024-11-19 09:36:00.039328] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60636 ] 00:08:12.600 [2024-11-19 09:36:00.188168] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:12.859 [2024-11-19 09:36:00.247831] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:12.859 [2024-11-19 09:36:00.310694] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:12.859  [2024-11-19T09:36:00.740Z] Copying: 512/512 [B] (average 500 kBps) 00:08:13.117 00:08:13.117 09:36:00 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ u6ul0u5ppqk3uhdw50laahoafz2iue8govvt3fmopmowrqbdstlkupg0sey976r8am9kbfwslwdvl6rq41q2pntn3lc7wzeed469jyvybe7zzse9odeny0igkdcvo2rwc2vy5c09nmf7j4ogxu4dzn5gkovs3wm6ujzexen6r9zk0is2pycpk54p92cfm3wr079t8mfsl3oi98pdhsb8uzc8lbws38g9wmd1b47zx66q7qgja975zk89kcoaci49tmg6shi73eo1s3q3xuau93knizjhsa23c416c444o9e5nj511te46g9mjvyvodp6zoyck2qxiz6ymnff59bz2s07hhw3uqvlgcv4am97wl9ek4i0ujz4kcl0utl3fe9x4vkyvsgcl8v0ghn377mfv70jzfrg6gqmcagd3rav30fa5hgcejxhovazq46oqovhfrgp0ie2slf6vengvrldop3vcjfvq5upwzf5ux8deeimcc58mtax9ijh6dg8zm8g == \u\6\u\l\0\u\5\p\p\q\k\3\u\h\d\w\5\0\l\a\a\h\o\a\f\z\2\i\u\e\8\g\o\v\v\t\3\f\m\o\p\m\o\w\r\q\b\d\s\t\l\k\u\p\g\0\s\e\y\9\7\6\r\8\a\m\9\k\b\f\w\s\l\w\d\v\l\6\r\q\4\1\q\2\p\n\t\n\3\l\c\7\w\z\e\e\d\4\6\9\j\y\v\y\b\e\7\z\z\s\e\9\o\d\e\n\y\0\i\g\k\d\c\v\o\2\r\w\c\2\v\y\5\c\0\9\n\m\f\7\j\4\o\g\x\u\4\d\z\n\5\g\k\o\v\s\3\w\m\6\u\j\z\e\x\e\n\6\r\9\z\k\0\i\s\2\p\y\c\p\k\5\4\p\9\2\c\f\m\3\w\r\0\7\9\t\8\m\f\s\l\3\o\i\9\8\p\d\h\s\b\8\u\z\c\8\l\b\w\s\3\8\g\9\w\m\d\1\b\4\7\z\x\6\6\q\7\q\g\j\a\9\7\5\z\k\8\9\k\c\o\a\c\i\4\9\t\m\g\6\s\h\i\7\3\e\o\1\s\3\q\3\x\u\a\u\9\3\k\n\i\z\j\h\s\a\2\3\c\4\1\6\c\4\4\4\o\9\e\5\n\j\5\1\1\t\e\4\6\g\9\m\j\v\y\v\o\d\p\6\z\o\y\c\k\2\q\x\i\z\6\y\m\n\f\f\5\9\b\z\2\s\0\7\h\h\w\3\u\q\v\l\g\c\v\4\a\m\9\7\w\l\9\e\k\4\i\0\u\j\z\4\k\c\l\0\u\t\l\3\f\e\9\x\4\v\k\y\v\s\g\c\l\8\v\0\g\h\n\3\7\7\m\f\v\7\0\j\z\f\r\g\6\g\q\m\c\a\g\d\3\r\a\v\3\0\f\a\5\h\g\c\e\j\x\h\o\v\a\z\q\4\6\o\q\o\v\h\f\r\g\p\0\i\e\2\s\l\f\6\v\e\n\g\v\r\l\d\o\p\3\v\c\j\f\v\q\5\u\p\w\z\f\5\u\x\8\d\e\e\i\m\c\c\5\8\m\t\a\x\9\i\j\h\6\d\g\8\z\m\8\g ]] 00:08:13.117 09:36:00 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:13.118 09:36:00 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:08:13.118 [2024-11-19 09:36:00.653725] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 
00:08:13.118 [2024-11-19 09:36:00.654068] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60638 ] 00:08:13.377 [2024-11-19 09:36:00.806589] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:13.377 [2024-11-19 09:36:00.877400] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:13.377 [2024-11-19 09:36:00.939051] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:13.377  [2024-11-19T09:36:01.259Z] Copying: 512/512 [B] (average 500 kBps) 00:08:13.636 00:08:13.636 09:36:01 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ u6ul0u5ppqk3uhdw50laahoafz2iue8govvt3fmopmowrqbdstlkupg0sey976r8am9kbfwslwdvl6rq41q2pntn3lc7wzeed469jyvybe7zzse9odeny0igkdcvo2rwc2vy5c09nmf7j4ogxu4dzn5gkovs3wm6ujzexen6r9zk0is2pycpk54p92cfm3wr079t8mfsl3oi98pdhsb8uzc8lbws38g9wmd1b47zx66q7qgja975zk89kcoaci49tmg6shi73eo1s3q3xuau93knizjhsa23c416c444o9e5nj511te46g9mjvyvodp6zoyck2qxiz6ymnff59bz2s07hhw3uqvlgcv4am97wl9ek4i0ujz4kcl0utl3fe9x4vkyvsgcl8v0ghn377mfv70jzfrg6gqmcagd3rav30fa5hgcejxhovazq46oqovhfrgp0ie2slf6vengvrldop3vcjfvq5upwzf5ux8deeimcc58mtax9ijh6dg8zm8g == \u\6\u\l\0\u\5\p\p\q\k\3\u\h\d\w\5\0\l\a\a\h\o\a\f\z\2\i\u\e\8\g\o\v\v\t\3\f\m\o\p\m\o\w\r\q\b\d\s\t\l\k\u\p\g\0\s\e\y\9\7\6\r\8\a\m\9\k\b\f\w\s\l\w\d\v\l\6\r\q\4\1\q\2\p\n\t\n\3\l\c\7\w\z\e\e\d\4\6\9\j\y\v\y\b\e\7\z\z\s\e\9\o\d\e\n\y\0\i\g\k\d\c\v\o\2\r\w\c\2\v\y\5\c\0\9\n\m\f\7\j\4\o\g\x\u\4\d\z\n\5\g\k\o\v\s\3\w\m\6\u\j\z\e\x\e\n\6\r\9\z\k\0\i\s\2\p\y\c\p\k\5\4\p\9\2\c\f\m\3\w\r\0\7\9\t\8\m\f\s\l\3\o\i\9\8\p\d\h\s\b\8\u\z\c\8\l\b\w\s\3\8\g\9\w\m\d\1\b\4\7\z\x\6\6\q\7\q\g\j\a\9\7\5\z\k\8\9\k\c\o\a\c\i\4\9\t\m\g\6\s\h\i\7\3\e\o\1\s\3\q\3\x\u\a\u\9\3\k\n\i\z\j\h\s\a\2\3\c\4\1\6\c\4\4\4\o\9\e\5\n\j\5\1\1\t\e\4\6\g\9\m\j\v\y\v\o\d\p\6\z\o\y\c\k\2\q\x\i\z\6\y\m\n\f\f\5\9\b\z\2\s\0\7\h\h\w\3\u\q\v\l\g\c\v\4\a\m\9\7\w\l\9\e\k\4\i\0\u\j\z\4\k\c\l\0\u\t\l\3\f\e\9\x\4\v\k\y\v\s\g\c\l\8\v\0\g\h\n\3\7\7\m\f\v\7\0\j\z\f\r\g\6\g\q\m\c\a\g\d\3\r\a\v\3\0\f\a\5\h\g\c\e\j\x\h\o\v\a\z\q\4\6\o\q\o\v\h\f\r\g\p\0\i\e\2\s\l\f\6\v\e\n\g\v\r\l\d\o\p\3\v\c\j\f\v\q\5\u\p\w\z\f\5\u\x\8\d\e\e\i\m\c\c\5\8\m\t\a\x\9\i\j\h\6\d\g\8\z\m\8\g ]] 00:08:13.636 09:36:01 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:13.636 09:36:01 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:08:13.636 [2024-11-19 09:36:01.255512] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 
00:08:13.636 [2024-11-19 09:36:01.255617] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60651 ] 00:08:13.894 [2024-11-19 09:36:01.404355] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:13.894 [2024-11-19 09:36:01.466218] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:14.153 [2024-11-19 09:36:01.519820] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:14.153  [2024-11-19T09:36:01.776Z] Copying: 512/512 [B] (average 500 kBps) 00:08:14.153 00:08:14.153 09:36:01 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ u6ul0u5ppqk3uhdw50laahoafz2iue8govvt3fmopmowrqbdstlkupg0sey976r8am9kbfwslwdvl6rq41q2pntn3lc7wzeed469jyvybe7zzse9odeny0igkdcvo2rwc2vy5c09nmf7j4ogxu4dzn5gkovs3wm6ujzexen6r9zk0is2pycpk54p92cfm3wr079t8mfsl3oi98pdhsb8uzc8lbws38g9wmd1b47zx66q7qgja975zk89kcoaci49tmg6shi73eo1s3q3xuau93knizjhsa23c416c444o9e5nj511te46g9mjvyvodp6zoyck2qxiz6ymnff59bz2s07hhw3uqvlgcv4am97wl9ek4i0ujz4kcl0utl3fe9x4vkyvsgcl8v0ghn377mfv70jzfrg6gqmcagd3rav30fa5hgcejxhovazq46oqovhfrgp0ie2slf6vengvrldop3vcjfvq5upwzf5ux8deeimcc58mtax9ijh6dg8zm8g == \u\6\u\l\0\u\5\p\p\q\k\3\u\h\d\w\5\0\l\a\a\h\o\a\f\z\2\i\u\e\8\g\o\v\v\t\3\f\m\o\p\m\o\w\r\q\b\d\s\t\l\k\u\p\g\0\s\e\y\9\7\6\r\8\a\m\9\k\b\f\w\s\l\w\d\v\l\6\r\q\4\1\q\2\p\n\t\n\3\l\c\7\w\z\e\e\d\4\6\9\j\y\v\y\b\e\7\z\z\s\e\9\o\d\e\n\y\0\i\g\k\d\c\v\o\2\r\w\c\2\v\y\5\c\0\9\n\m\f\7\j\4\o\g\x\u\4\d\z\n\5\g\k\o\v\s\3\w\m\6\u\j\z\e\x\e\n\6\r\9\z\k\0\i\s\2\p\y\c\p\k\5\4\p\9\2\c\f\m\3\w\r\0\7\9\t\8\m\f\s\l\3\o\i\9\8\p\d\h\s\b\8\u\z\c\8\l\b\w\s\3\8\g\9\w\m\d\1\b\4\7\z\x\6\6\q\7\q\g\j\a\9\7\5\z\k\8\9\k\c\o\a\c\i\4\9\t\m\g\6\s\h\i\7\3\e\o\1\s\3\q\3\x\u\a\u\9\3\k\n\i\z\j\h\s\a\2\3\c\4\1\6\c\4\4\4\o\9\e\5\n\j\5\1\1\t\e\4\6\g\9\m\j\v\y\v\o\d\p\6\z\o\y\c\k\2\q\x\i\z\6\y\m\n\f\f\5\9\b\z\2\s\0\7\h\h\w\3\u\q\v\l\g\c\v\4\a\m\9\7\w\l\9\e\k\4\i\0\u\j\z\4\k\c\l\0\u\t\l\3\f\e\9\x\4\v\k\y\v\s\g\c\l\8\v\0\g\h\n\3\7\7\m\f\v\7\0\j\z\f\r\g\6\g\q\m\c\a\g\d\3\r\a\v\3\0\f\a\5\h\g\c\e\j\x\h\o\v\a\z\q\4\6\o\q\o\v\h\f\r\g\p\0\i\e\2\s\l\f\6\v\e\n\g\v\r\l\d\o\p\3\v\c\j\f\v\q\5\u\p\w\z\f\5\u\x\8\d\e\e\i\m\c\c\5\8\m\t\a\x\9\i\j\h\6\d\g\8\z\m\8\g ]] 00:08:14.153 09:36:01 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:14.153 09:36:01 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:08:14.412 [2024-11-19 09:36:01.825125] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 
00:08:14.412 [2024-11-19 09:36:01.825522] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60653 ] 00:08:14.412 [2024-11-19 09:36:01.979903] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:14.670 [2024-11-19 09:36:02.044459] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:14.670 [2024-11-19 09:36:02.100329] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:14.670  [2024-11-19T09:36:02.553Z] Copying: 512/512 [B] (average 250 kBps) 00:08:14.930 00:08:14.930 ************************************ 00:08:14.930 END TEST dd_flags_misc_forced_aio 00:08:14.930 ************************************ 00:08:14.930 09:36:02 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ u6ul0u5ppqk3uhdw50laahoafz2iue8govvt3fmopmowrqbdstlkupg0sey976r8am9kbfwslwdvl6rq41q2pntn3lc7wzeed469jyvybe7zzse9odeny0igkdcvo2rwc2vy5c09nmf7j4ogxu4dzn5gkovs3wm6ujzexen6r9zk0is2pycpk54p92cfm3wr079t8mfsl3oi98pdhsb8uzc8lbws38g9wmd1b47zx66q7qgja975zk89kcoaci49tmg6shi73eo1s3q3xuau93knizjhsa23c416c444o9e5nj511te46g9mjvyvodp6zoyck2qxiz6ymnff59bz2s07hhw3uqvlgcv4am97wl9ek4i0ujz4kcl0utl3fe9x4vkyvsgcl8v0ghn377mfv70jzfrg6gqmcagd3rav30fa5hgcejxhovazq46oqovhfrgp0ie2slf6vengvrldop3vcjfvq5upwzf5ux8deeimcc58mtax9ijh6dg8zm8g == \u\6\u\l\0\u\5\p\p\q\k\3\u\h\d\w\5\0\l\a\a\h\o\a\f\z\2\i\u\e\8\g\o\v\v\t\3\f\m\o\p\m\o\w\r\q\b\d\s\t\l\k\u\p\g\0\s\e\y\9\7\6\r\8\a\m\9\k\b\f\w\s\l\w\d\v\l\6\r\q\4\1\q\2\p\n\t\n\3\l\c\7\w\z\e\e\d\4\6\9\j\y\v\y\b\e\7\z\z\s\e\9\o\d\e\n\y\0\i\g\k\d\c\v\o\2\r\w\c\2\v\y\5\c\0\9\n\m\f\7\j\4\o\g\x\u\4\d\z\n\5\g\k\o\v\s\3\w\m\6\u\j\z\e\x\e\n\6\r\9\z\k\0\i\s\2\p\y\c\p\k\5\4\p\9\2\c\f\m\3\w\r\0\7\9\t\8\m\f\s\l\3\o\i\9\8\p\d\h\s\b\8\u\z\c\8\l\b\w\s\3\8\g\9\w\m\d\1\b\4\7\z\x\6\6\q\7\q\g\j\a\9\7\5\z\k\8\9\k\c\o\a\c\i\4\9\t\m\g\6\s\h\i\7\3\e\o\1\s\3\q\3\x\u\a\u\9\3\k\n\i\z\j\h\s\a\2\3\c\4\1\6\c\4\4\4\o\9\e\5\n\j\5\1\1\t\e\4\6\g\9\m\j\v\y\v\o\d\p\6\z\o\y\c\k\2\q\x\i\z\6\y\m\n\f\f\5\9\b\z\2\s\0\7\h\h\w\3\u\q\v\l\g\c\v\4\a\m\9\7\w\l\9\e\k\4\i\0\u\j\z\4\k\c\l\0\u\t\l\3\f\e\9\x\4\v\k\y\v\s\g\c\l\8\v\0\g\h\n\3\7\7\m\f\v\7\0\j\z\f\r\g\6\g\q\m\c\a\g\d\3\r\a\v\3\0\f\a\5\h\g\c\e\j\x\h\o\v\a\z\q\4\6\o\q\o\v\h\f\r\g\p\0\i\e\2\s\l\f\6\v\e\n\g\v\r\l\d\o\p\3\v\c\j\f\v\q\5\u\p\w\z\f\5\u\x\8\d\e\e\i\m\c\c\5\8\m\t\a\x\9\i\j\h\6\d\g\8\z\m\8\g ]] 00:08:14.930 00:08:14.930 real 0m4.715s 00:08:14.930 user 0m2.570s 00:08:14.930 sys 0m1.169s 00:08:14.930 09:36:02 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:14.930 09:36:02 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:14.930 09:36:02 spdk_dd.spdk_dd_posix -- dd/posix.sh@1 -- # cleanup 00:08:14.930 09:36:02 spdk_dd.spdk_dd_posix -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:08:14.930 09:36:02 spdk_dd.spdk_dd_posix -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:08:14.930 ************************************ 00:08:14.930 END TEST spdk_dd_posix 00:08:14.930 ************************************ 00:08:14.930 00:08:14.930 real 0m20.699s 00:08:14.930 user 0m10.044s 00:08:14.930 sys 0m6.620s 00:08:14.930 09:36:02 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:08:14.930 09:36:02 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:14.930 09:36:02 spdk_dd -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:08:14.930 09:36:02 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:14.930 09:36:02 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:14.930 09:36:02 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:14.930 ************************************ 00:08:14.930 START TEST spdk_dd_malloc 00:08:14.930 ************************************ 00:08:14.930 09:36:02 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:08:14.930 * Looking for test storage... 00:08:14.930 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:14.930 09:36:02 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:14.930 09:36:02 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1693 -- # lcov --version 00:08:14.930 09:36:02 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:15.189 09:36:02 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:15.189 09:36:02 spdk_dd.spdk_dd_malloc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:15.189 09:36:02 spdk_dd.spdk_dd_malloc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:15.189 09:36:02 spdk_dd.spdk_dd_malloc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:15.189 09:36:02 spdk_dd.spdk_dd_malloc -- scripts/common.sh@336 -- # IFS=.-: 00:08:15.189 09:36:02 spdk_dd.spdk_dd_malloc -- scripts/common.sh@336 -- # read -ra ver1 00:08:15.189 09:36:02 spdk_dd.spdk_dd_malloc -- scripts/common.sh@337 -- # IFS=.-: 00:08:15.189 09:36:02 spdk_dd.spdk_dd_malloc -- scripts/common.sh@337 -- # read -ra ver2 00:08:15.189 09:36:02 spdk_dd.spdk_dd_malloc -- scripts/common.sh@338 -- # local 'op=<' 00:08:15.189 09:36:02 spdk_dd.spdk_dd_malloc -- scripts/common.sh@340 -- # ver1_l=2 00:08:15.189 09:36:02 spdk_dd.spdk_dd_malloc -- scripts/common.sh@341 -- # ver2_l=1 00:08:15.189 09:36:02 spdk_dd.spdk_dd_malloc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:15.189 09:36:02 spdk_dd.spdk_dd_malloc -- scripts/common.sh@344 -- # case "$op" in 00:08:15.189 09:36:02 spdk_dd.spdk_dd_malloc -- scripts/common.sh@345 -- # : 1 00:08:15.189 09:36:02 spdk_dd.spdk_dd_malloc -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:15.189 09:36:02 spdk_dd.spdk_dd_malloc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:15.189 09:36:02 spdk_dd.spdk_dd_malloc -- scripts/common.sh@365 -- # decimal 1 00:08:15.189 09:36:02 spdk_dd.spdk_dd_malloc -- scripts/common.sh@353 -- # local d=1 00:08:15.189 09:36:02 spdk_dd.spdk_dd_malloc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:15.189 09:36:02 spdk_dd.spdk_dd_malloc -- scripts/common.sh@355 -- # echo 1 00:08:15.189 09:36:02 spdk_dd.spdk_dd_malloc -- scripts/common.sh@365 -- # ver1[v]=1 00:08:15.189 09:36:02 spdk_dd.spdk_dd_malloc -- scripts/common.sh@366 -- # decimal 2 00:08:15.189 09:36:02 spdk_dd.spdk_dd_malloc -- scripts/common.sh@353 -- # local d=2 00:08:15.189 09:36:02 spdk_dd.spdk_dd_malloc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:15.189 09:36:02 spdk_dd.spdk_dd_malloc -- scripts/common.sh@355 -- # echo 2 00:08:15.189 09:36:02 spdk_dd.spdk_dd_malloc -- scripts/common.sh@366 -- # ver2[v]=2 00:08:15.189 09:36:02 spdk_dd.spdk_dd_malloc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:15.189 09:36:02 spdk_dd.spdk_dd_malloc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:15.189 09:36:02 spdk_dd.spdk_dd_malloc -- scripts/common.sh@368 -- # return 0 00:08:15.189 09:36:02 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:15.189 09:36:02 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:15.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:15.189 --rc genhtml_branch_coverage=1 00:08:15.189 --rc genhtml_function_coverage=1 00:08:15.189 --rc genhtml_legend=1 00:08:15.189 --rc geninfo_all_blocks=1 00:08:15.189 --rc geninfo_unexecuted_blocks=1 00:08:15.189 00:08:15.189 ' 00:08:15.189 09:36:02 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:15.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:15.189 --rc genhtml_branch_coverage=1 00:08:15.189 --rc genhtml_function_coverage=1 00:08:15.189 --rc genhtml_legend=1 00:08:15.189 --rc geninfo_all_blocks=1 00:08:15.189 --rc geninfo_unexecuted_blocks=1 00:08:15.189 00:08:15.189 ' 00:08:15.189 09:36:02 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:15.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:15.189 --rc genhtml_branch_coverage=1 00:08:15.189 --rc genhtml_function_coverage=1 00:08:15.189 --rc genhtml_legend=1 00:08:15.189 --rc geninfo_all_blocks=1 00:08:15.189 --rc geninfo_unexecuted_blocks=1 00:08:15.189 00:08:15.189 ' 00:08:15.189 09:36:02 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:15.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:15.189 --rc genhtml_branch_coverage=1 00:08:15.189 --rc genhtml_function_coverage=1 00:08:15.189 --rc genhtml_legend=1 00:08:15.189 --rc geninfo_all_blocks=1 00:08:15.189 --rc geninfo_unexecuted_blocks=1 00:08:15.189 00:08:15.189 ' 00:08:15.189 09:36:02 spdk_dd.spdk_dd_malloc -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:15.189 09:36:02 spdk_dd.spdk_dd_malloc -- scripts/common.sh@15 -- # shopt -s extglob 00:08:15.189 09:36:02 spdk_dd.spdk_dd_malloc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:15.189 09:36:02 spdk_dd.spdk_dd_malloc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:15.189 09:36:02 spdk_dd.spdk_dd_malloc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:15.190 09:36:02 
spdk_dd.spdk_dd_malloc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.190 09:36:02 spdk_dd.spdk_dd_malloc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.190 09:36:02 spdk_dd.spdk_dd_malloc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.190 09:36:02 spdk_dd.spdk_dd_malloc -- paths/export.sh@5 -- # export PATH 00:08:15.190 09:36:02 spdk_dd.spdk_dd_malloc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.190 09:36:02 spdk_dd.spdk_dd_malloc -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:08:15.190 09:36:02 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:15.190 09:36:02 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:15.190 09:36:02 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:08:15.190 ************************************ 00:08:15.190 START TEST dd_malloc_copy 00:08:15.190 ************************************ 00:08:15.190 09:36:02 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1129 -- # malloc_copy 00:08:15.190 09:36:02 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:08:15.190 09:36:02 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:08:15.190 09:36:02 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 
00:08:15.190 09:36:02 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:08:15.190 09:36:02 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:08:15.190 09:36:02 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:08:15.190 09:36:02 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:08:15.190 09:36:02 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # gen_conf 00:08:15.190 09:36:02 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:08:15.190 09:36:02 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:08:15.190 [2024-11-19 09:36:02.730693] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:08:15.190 [2024-11-19 09:36:02.730810] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60735 ] 00:08:15.190 { 00:08:15.190 "subsystems": [ 00:08:15.190 { 00:08:15.190 "subsystem": "bdev", 00:08:15.190 "config": [ 00:08:15.190 { 00:08:15.190 "params": { 00:08:15.190 "block_size": 512, 00:08:15.190 "num_blocks": 1048576, 00:08:15.190 "name": "malloc0" 00:08:15.190 }, 00:08:15.190 "method": "bdev_malloc_create" 00:08:15.190 }, 00:08:15.190 { 00:08:15.190 "params": { 00:08:15.190 "block_size": 512, 00:08:15.190 "num_blocks": 1048576, 00:08:15.190 "name": "malloc1" 00:08:15.190 }, 00:08:15.190 "method": "bdev_malloc_create" 00:08:15.190 }, 00:08:15.190 { 00:08:15.190 "method": "bdev_wait_for_examine" 00:08:15.190 } 00:08:15.190 ] 00:08:15.190 } 00:08:15.190 ] 00:08:15.190 } 00:08:15.449 [2024-11-19 09:36:02.884591] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:15.449 [2024-11-19 09:36:02.954115] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:15.449 [2024-11-19 09:36:03.011495] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:16.826  [2024-11-19T09:36:05.384Z] Copying: 194/512 [MB] (194 MBps) [2024-11-19T09:36:06.357Z] Copying: 387/512 [MB] (192 MBps) [2024-11-19T09:36:06.616Z] Copying: 512/512 [MB] (average 193 MBps) 00:08:18.993 00:08:18.993 09:36:06 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:08:18.993 09:36:06 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # gen_conf 00:08:18.993 09:36:06 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:08:18.993 09:36:06 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:08:19.252 [2024-11-19 09:36:06.661852] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 
00:08:19.252 [2024-11-19 09:36:06.662271] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60788 ] 00:08:19.252 { 00:08:19.252 "subsystems": [ 00:08:19.252 { 00:08:19.252 "subsystem": "bdev", 00:08:19.252 "config": [ 00:08:19.252 { 00:08:19.252 "params": { 00:08:19.252 "block_size": 512, 00:08:19.252 "num_blocks": 1048576, 00:08:19.252 "name": "malloc0" 00:08:19.252 }, 00:08:19.252 "method": "bdev_malloc_create" 00:08:19.252 }, 00:08:19.252 { 00:08:19.252 "params": { 00:08:19.252 "block_size": 512, 00:08:19.252 "num_blocks": 1048576, 00:08:19.252 "name": "malloc1" 00:08:19.252 }, 00:08:19.252 "method": "bdev_malloc_create" 00:08:19.252 }, 00:08:19.252 { 00:08:19.252 "method": "bdev_wait_for_examine" 00:08:19.252 } 00:08:19.252 ] 00:08:19.252 } 00:08:19.252 ] 00:08:19.252 } 00:08:19.252 [2024-11-19 09:36:06.806742] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:19.252 [2024-11-19 09:36:06.869050] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:19.512 [2024-11-19 09:36:06.925757] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:20.889  [2024-11-19T09:36:09.451Z] Copying: 209/512 [MB] (209 MBps) [2024-11-19T09:36:10.021Z] Copying: 410/512 [MB] (201 MBps) [2024-11-19T09:36:10.591Z] Copying: 512/512 [MB] (average 203 MBps) 00:08:22.968 00:08:22.968 00:08:22.968 real 0m7.721s 00:08:22.968 user 0m6.694s 00:08:22.968 sys 0m0.861s 00:08:22.968 09:36:10 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:22.968 ************************************ 00:08:22.968 END TEST dd_malloc_copy 00:08:22.968 09:36:10 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:08:22.968 ************************************ 00:08:22.968 ************************************ 00:08:22.968 END TEST spdk_dd_malloc 00:08:22.968 ************************************ 00:08:22.968 00:08:22.968 real 0m7.972s 00:08:22.968 user 0m6.824s 00:08:22.968 sys 0m0.985s 00:08:22.968 09:36:10 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:22.968 09:36:10 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:08:22.968 09:36:10 spdk_dd -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:08:22.968 09:36:10 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:22.968 09:36:10 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:22.968 09:36:10 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:22.968 ************************************ 00:08:22.968 START TEST spdk_dd_bdev_to_bdev 00:08:22.968 ************************************ 00:08:22.968 09:36:10 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:08:22.968 * Looking for test storage... 
00:08:22.968 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:22.968 09:36:10 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:22.968 09:36:10 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1693 -- # lcov --version 00:08:22.968 09:36:10 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:23.228 09:36:10 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:23.228 09:36:10 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:23.228 09:36:10 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:23.228 09:36:10 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:23.228 09:36:10 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@336 -- # IFS=.-: 00:08:23.228 09:36:10 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@336 -- # read -ra ver1 00:08:23.228 09:36:10 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@337 -- # IFS=.-: 00:08:23.228 09:36:10 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@337 -- # read -ra ver2 00:08:23.228 09:36:10 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@338 -- # local 'op=<' 00:08:23.228 09:36:10 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@340 -- # ver1_l=2 00:08:23.228 09:36:10 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@341 -- # ver2_l=1 00:08:23.228 09:36:10 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:23.228 09:36:10 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@344 -- # case "$op" in 00:08:23.228 09:36:10 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@345 -- # : 1 00:08:23.228 09:36:10 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:23.228 09:36:10 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:23.228 09:36:10 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@365 -- # decimal 1 00:08:23.228 09:36:10 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@353 -- # local d=1 00:08:23.228 09:36:10 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:23.228 09:36:10 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@355 -- # echo 1 00:08:23.228 09:36:10 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@365 -- # ver1[v]=1 00:08:23.228 09:36:10 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@366 -- # decimal 2 00:08:23.228 09:36:10 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@353 -- # local d=2 00:08:23.228 09:36:10 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:23.228 09:36:10 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@355 -- # echo 2 00:08:23.228 09:36:10 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@366 -- # ver2[v]=2 00:08:23.228 09:36:10 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:23.229 09:36:10 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:23.229 09:36:10 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@368 -- # return 0 00:08:23.229 09:36:10 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:23.229 09:36:10 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:23.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:23.229 --rc genhtml_branch_coverage=1 00:08:23.229 --rc genhtml_function_coverage=1 00:08:23.229 --rc genhtml_legend=1 00:08:23.229 --rc geninfo_all_blocks=1 00:08:23.229 --rc geninfo_unexecuted_blocks=1 00:08:23.229 00:08:23.229 ' 00:08:23.229 09:36:10 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:23.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:23.229 --rc genhtml_branch_coverage=1 00:08:23.229 --rc genhtml_function_coverage=1 00:08:23.229 --rc genhtml_legend=1 00:08:23.229 --rc geninfo_all_blocks=1 00:08:23.229 --rc geninfo_unexecuted_blocks=1 00:08:23.229 00:08:23.229 ' 00:08:23.229 09:36:10 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:23.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:23.229 --rc genhtml_branch_coverage=1 00:08:23.229 --rc genhtml_function_coverage=1 00:08:23.229 --rc genhtml_legend=1 00:08:23.229 --rc geninfo_all_blocks=1 00:08:23.229 --rc geninfo_unexecuted_blocks=1 00:08:23.229 00:08:23.229 ' 00:08:23.229 09:36:10 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:23.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:23.229 --rc genhtml_branch_coverage=1 00:08:23.229 --rc genhtml_function_coverage=1 00:08:23.229 --rc genhtml_legend=1 00:08:23.229 --rc geninfo_all_blocks=1 00:08:23.229 --rc geninfo_unexecuted_blocks=1 00:08:23.229 00:08:23.229 ' 00:08:23.229 09:36:10 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:23.229 09:36:10 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@15 -- # shopt -s extglob 00:08:23.229 09:36:10 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:23.229 09:36:10 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:23.229 09:36:10 
spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:23.229 09:36:10 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:23.229 09:36:10 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:23.229 09:36:10 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:23.229 09:36:10 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@5 -- # export PATH 00:08:23.229 09:36:10 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:23.229 09:36:10 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:08:23.229 09:36:10 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:08:23.229 09:36:10 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:08:23.229 09:36:10 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@51 -- # (( 2 > 1 )) 00:08:23.229 09:36:10 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0=Nvme0 00:08:23.229 09:36:10 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # bdev0=Nvme0n1 00:08:23.229 09:36:10 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0_pci=0000:00:10.0 00:08:23.229 09:36:10 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # nvme1=Nvme1 00:08:23.229 09:36:10 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # bdev1=Nvme1n1 00:08:23.229 09:36:10 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # 
nvme1_pci=0000:00:11.0 00:08:23.229 09:36:10 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:08:23.229 09:36:10 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # declare -A method_bdev_nvme_attach_controller_0 00:08:23.229 09:36:10 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme1' ['traddr']='0000:00:11.0' ['trtype']='pcie') 00:08:23.229 09:36:10 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # declare -A method_bdev_nvme_attach_controller_1 00:08:23.229 09:36:10 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:23.229 09:36:10 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:23.229 09:36:10 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:08:23.229 09:36:10 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:08:23.229 09:36:10 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:08:23.229 09:36:10 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:08:23.229 09:36:10 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:23.229 09:36:10 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:23.229 ************************************ 00:08:23.229 START TEST dd_inflate_file 00:08:23.229 ************************************ 00:08:23.229 09:36:10 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:08:23.229 [2024-11-19 09:36:10.752526] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 
00:08:23.229 [2024-11-19 09:36:10.752634] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60906 ] 00:08:23.520 [2024-11-19 09:36:10.901622] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:23.520 [2024-11-19 09:36:10.966537] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:23.520 [2024-11-19 09:36:11.022036] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:23.520  [2024-11-19T09:36:11.414Z] Copying: 64/64 [MB] (average 1488 MBps) 00:08:23.791 00:08:23.791 00:08:23.791 real 0m0.590s 00:08:23.791 user 0m0.345s 00:08:23.791 sys 0m0.302s 00:08:23.791 ************************************ 00:08:23.791 END TEST dd_inflate_file 00:08:23.791 ************************************ 00:08:23.791 09:36:11 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:23.791 09:36:11 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@10 -- # set +x 00:08:23.791 09:36:11 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:08:23.791 09:36:11 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:08:23.791 09:36:11 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:08:23.791 09:36:11 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:08:23.791 09:36:11 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:08:23.791 09:36:11 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:23.791 09:36:11 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:08:23.791 09:36:11 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:23.791 09:36:11 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:23.791 ************************************ 00:08:23.791 START TEST dd_copy_to_out_bdev 00:08:23.791 ************************************ 00:08:23.791 09:36:11 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:08:23.791 { 00:08:23.791 "subsystems": [ 00:08:23.791 { 00:08:23.791 "subsystem": "bdev", 00:08:23.791 "config": [ 00:08:23.791 { 00:08:23.791 "params": { 00:08:23.791 "trtype": "pcie", 00:08:23.791 "traddr": "0000:00:10.0", 00:08:23.791 "name": "Nvme0" 00:08:23.791 }, 00:08:23.791 "method": "bdev_nvme_attach_controller" 00:08:23.791 }, 00:08:23.791 { 00:08:23.791 "params": { 00:08:23.791 "trtype": "pcie", 00:08:23.791 "traddr": "0000:00:11.0", 00:08:23.791 "name": "Nvme1" 00:08:23.791 }, 00:08:23.791 "method": "bdev_nvme_attach_controller" 00:08:23.791 }, 00:08:23.791 { 00:08:23.791 "method": "bdev_wait_for_examine" 00:08:23.791 } 00:08:23.791 ] 00:08:23.791 } 00:08:23.791 ] 00:08:23.791 } 00:08:23.791 [2024-11-19 09:36:11.396702] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 
00:08:23.791 [2024-11-19 09:36:11.396808] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60944 ] 00:08:24.051 [2024-11-19 09:36:11.544219] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:24.051 [2024-11-19 09:36:11.593995] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:24.051 [2024-11-19 09:36:11.650765] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:25.425  [2024-11-19T09:36:13.048Z] Copying: 59/64 [MB] (59 MBps) [2024-11-19T09:36:13.307Z] Copying: 64/64 [MB] (average 58 MBps) 00:08:25.684 00:08:25.684 00:08:25.684 real 0m1.800s 00:08:25.684 user 0m1.566s 00:08:25.684 sys 0m1.437s 00:08:25.684 ************************************ 00:08:25.684 END TEST dd_copy_to_out_bdev 00:08:25.684 ************************************ 00:08:25.684 09:36:13 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:25.685 09:36:13 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:25.685 09:36:13 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@113 -- # count=65 00:08:25.685 09:36:13 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:08:25.685 09:36:13 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:25.685 09:36:13 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:25.685 09:36:13 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:25.685 ************************************ 00:08:25.685 START TEST dd_offset_magic 00:08:25.685 ************************************ 00:08:25.685 09:36:13 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1129 -- # offset_magic 00:08:25.685 09:36:13 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:08:25.685 09:36:13 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:08:25.685 09:36:13 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:08:25.685 09:36:13 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:08:25.685 09:36:13 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:08:25.685 09:36:13 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:08:25.685 09:36:13 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:08:25.685 09:36:13 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:08:25.685 [2024-11-19 09:36:13.241447] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 
00:08:25.685 [2024-11-19 09:36:13.241540] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60988 ] 00:08:25.685 { 00:08:25.685 "subsystems": [ 00:08:25.685 { 00:08:25.685 "subsystem": "bdev", 00:08:25.685 "config": [ 00:08:25.685 { 00:08:25.685 "params": { 00:08:25.685 "trtype": "pcie", 00:08:25.685 "traddr": "0000:00:10.0", 00:08:25.685 "name": "Nvme0" 00:08:25.685 }, 00:08:25.685 "method": "bdev_nvme_attach_controller" 00:08:25.685 }, 00:08:25.685 { 00:08:25.685 "params": { 00:08:25.685 "trtype": "pcie", 00:08:25.685 "traddr": "0000:00:11.0", 00:08:25.685 "name": "Nvme1" 00:08:25.685 }, 00:08:25.685 "method": "bdev_nvme_attach_controller" 00:08:25.685 }, 00:08:25.685 { 00:08:25.685 "method": "bdev_wait_for_examine" 00:08:25.685 } 00:08:25.685 ] 00:08:25.685 } 00:08:25.685 ] 00:08:25.685 } 00:08:25.944 [2024-11-19 09:36:13.382098] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:25.944 [2024-11-19 09:36:13.445458] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:25.944 [2024-11-19 09:36:13.503668] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:26.203  [2024-11-19T09:36:14.084Z] Copying: 65/65 [MB] (average 866 MBps) 00:08:26.461 00:08:26.461 09:36:13 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:08:26.461 09:36:13 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:08:26.461 09:36:13 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:08:26.461 09:36:13 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:08:26.461 { 00:08:26.461 "subsystems": [ 00:08:26.461 { 00:08:26.461 "subsystem": "bdev", 00:08:26.461 "config": [ 00:08:26.461 { 00:08:26.461 "params": { 00:08:26.461 "trtype": "pcie", 00:08:26.461 "traddr": "0000:00:10.0", 00:08:26.461 "name": "Nvme0" 00:08:26.461 }, 00:08:26.461 "method": "bdev_nvme_attach_controller" 00:08:26.461 }, 00:08:26.461 { 00:08:26.461 "params": { 00:08:26.461 "trtype": "pcie", 00:08:26.461 "traddr": "0000:00:11.0", 00:08:26.461 "name": "Nvme1" 00:08:26.461 }, 00:08:26.461 "method": "bdev_nvme_attach_controller" 00:08:26.461 }, 00:08:26.461 { 00:08:26.461 "method": "bdev_wait_for_examine" 00:08:26.461 } 00:08:26.461 ] 00:08:26.461 } 00:08:26.461 ] 00:08:26.461 } 00:08:26.461 [2024-11-19 09:36:14.051687] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 
00:08:26.461 [2024-11-19 09:36:14.051777] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61002 ] 00:08:26.720 [2024-11-19 09:36:14.199460] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:26.720 [2024-11-19 09:36:14.254125] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:26.720 [2024-11-19 09:36:14.309827] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:26.979  [2024-11-19T09:36:14.860Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:08:27.237 00:08:27.237 09:36:14 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:08:27.237 09:36:14 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:08:27.237 09:36:14 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:08:27.237 09:36:14 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:08:27.237 09:36:14 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:08:27.237 09:36:14 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:08:27.237 09:36:14 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:08:27.237 [2024-11-19 09:36:14.733209] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 
00:08:27.237 [2024-11-19 09:36:14.733469] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61024 ] 00:08:27.237 { 00:08:27.237 "subsystems": [ 00:08:27.237 { 00:08:27.237 "subsystem": "bdev", 00:08:27.237 "config": [ 00:08:27.237 { 00:08:27.237 "params": { 00:08:27.237 "trtype": "pcie", 00:08:27.237 "traddr": "0000:00:10.0", 00:08:27.237 "name": "Nvme0" 00:08:27.237 }, 00:08:27.237 "method": "bdev_nvme_attach_controller" 00:08:27.237 }, 00:08:27.237 { 00:08:27.237 "params": { 00:08:27.237 "trtype": "pcie", 00:08:27.237 "traddr": "0000:00:11.0", 00:08:27.237 "name": "Nvme1" 00:08:27.237 }, 00:08:27.237 "method": "bdev_nvme_attach_controller" 00:08:27.237 }, 00:08:27.237 { 00:08:27.237 "method": "bdev_wait_for_examine" 00:08:27.237 } 00:08:27.237 ] 00:08:27.237 } 00:08:27.237 ] 00:08:27.237 } 00:08:27.496 [2024-11-19 09:36:14.876282] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:27.496 [2024-11-19 09:36:14.926578] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:27.496 [2024-11-19 09:36:14.983270] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:27.756  [2024-11-19T09:36:15.638Z] Copying: 65/65 [MB] (average 984 MBps) 00:08:28.015 00:08:28.015 09:36:15 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:08:28.015 09:36:15 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:08:28.015 09:36:15 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:08:28.015 09:36:15 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:08:28.015 { 00:08:28.015 "subsystems": [ 00:08:28.015 { 00:08:28.015 "subsystem": "bdev", 00:08:28.015 "config": [ 00:08:28.015 { 00:08:28.015 "params": { 00:08:28.015 "trtype": "pcie", 00:08:28.015 "traddr": "0000:00:10.0", 00:08:28.015 "name": "Nvme0" 00:08:28.015 }, 00:08:28.015 "method": "bdev_nvme_attach_controller" 00:08:28.015 }, 00:08:28.015 { 00:08:28.015 "params": { 00:08:28.015 "trtype": "pcie", 00:08:28.015 "traddr": "0000:00:11.0", 00:08:28.015 "name": "Nvme1" 00:08:28.015 }, 00:08:28.015 "method": "bdev_nvme_attach_controller" 00:08:28.015 }, 00:08:28.015 { 00:08:28.015 "method": "bdev_wait_for_examine" 00:08:28.015 } 00:08:28.015 ] 00:08:28.015 } 00:08:28.015 ] 00:08:28.015 } 00:08:28.015 [2024-11-19 09:36:15.530791] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 
00:08:28.015 [2024-11-19 09:36:15.530896] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61039 ] 00:08:28.274 [2024-11-19 09:36:15.678001] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:28.274 [2024-11-19 09:36:15.739973] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:28.274 [2024-11-19 09:36:15.795560] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:28.533  [2024-11-19T09:36:16.416Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:08:28.793 00:08:28.793 ************************************ 00:08:28.793 END TEST dd_offset_magic 00:08:28.793 09:36:16 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:08:28.793 09:36:16 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:08:28.793 00:08:28.793 real 0m2.982s 00:08:28.793 user 0m2.139s 00:08:28.793 sys 0m0.917s 00:08:28.793 09:36:16 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:28.793 09:36:16 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:08:28.793 ************************************ 00:08:28.793 09:36:16 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:08:28.793 09:36:16 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:08:28.793 09:36:16 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:28.793 09:36:16 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:08:28.793 09:36:16 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:08:28.793 09:36:16 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:08:28.793 09:36:16 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:08:28.793 09:36:16 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:08:28.793 09:36:16 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:08:28.793 09:36:16 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:08:28.793 09:36:16 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:28.793 [2024-11-19 09:36:16.269573] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 
00:08:28.793 [2024-11-19 09:36:16.269684] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61076 ] 00:08:28.793 { 00:08:28.793 "subsystems": [ 00:08:28.793 { 00:08:28.793 "subsystem": "bdev", 00:08:28.793 "config": [ 00:08:28.793 { 00:08:28.793 "params": { 00:08:28.793 "trtype": "pcie", 00:08:28.793 "traddr": "0000:00:10.0", 00:08:28.793 "name": "Nvme0" 00:08:28.793 }, 00:08:28.793 "method": "bdev_nvme_attach_controller" 00:08:28.793 }, 00:08:28.793 { 00:08:28.793 "params": { 00:08:28.793 "trtype": "pcie", 00:08:28.793 "traddr": "0000:00:11.0", 00:08:28.793 "name": "Nvme1" 00:08:28.793 }, 00:08:28.793 "method": "bdev_nvme_attach_controller" 00:08:28.793 }, 00:08:28.793 { 00:08:28.793 "method": "bdev_wait_for_examine" 00:08:28.793 } 00:08:28.793 ] 00:08:28.793 } 00:08:28.793 ] 00:08:28.793 } 00:08:29.053 [2024-11-19 09:36:16.417717] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:29.053 [2024-11-19 09:36:16.481205] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:29.053 [2024-11-19 09:36:16.537537] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:29.312  [2024-11-19T09:36:16.935Z] Copying: 5120/5120 [kB] (average 1250 MBps) 00:08:29.312 00:08:29.312 09:36:16 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@43 -- # clear_nvme Nvme1n1 '' 4194330 00:08:29.312 09:36:16 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme1n1 00:08:29.312 09:36:16 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:08:29.312 09:36:16 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:08:29.312 09:36:16 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:08:29.312 09:36:16 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:08:29.312 09:36:16 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme1n1 --count=5 --json /dev/fd/62 00:08:29.312 09:36:16 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:08:29.312 09:36:16 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:08:29.312 09:36:16 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:29.571 { 00:08:29.572 "subsystems": [ 00:08:29.572 { 00:08:29.572 "subsystem": "bdev", 00:08:29.572 "config": [ 00:08:29.572 { 00:08:29.572 "params": { 00:08:29.572 "trtype": "pcie", 00:08:29.572 "traddr": "0000:00:10.0", 00:08:29.572 "name": "Nvme0" 00:08:29.572 }, 00:08:29.572 "method": "bdev_nvme_attach_controller" 00:08:29.572 }, 00:08:29.572 { 00:08:29.572 "params": { 00:08:29.572 "trtype": "pcie", 00:08:29.572 "traddr": "0000:00:11.0", 00:08:29.572 "name": "Nvme1" 00:08:29.572 }, 00:08:29.572 "method": "bdev_nvme_attach_controller" 00:08:29.572 }, 00:08:29.572 { 00:08:29.572 "method": "bdev_wait_for_examine" 00:08:29.572 } 00:08:29.572 ] 00:08:29.572 } 00:08:29.572 ] 00:08:29.572 } 00:08:29.572 [2024-11-19 09:36:16.966037] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 
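The offset_magic steps traced above come down to spdk_dd's --seek/--skip handling: data whose first 26 bytes are the magic string "This Is Our Magic, find it" is copied from Nvme0n1 into Nvme1n1 at a 1 MiB-granular offset, and one block is then read back from the same offset and compared. A rough by-hand sketch of that pattern, using only flags that appear in this log (the fd-62 JSON config and the dd.dump1 path are illustrative stand-ins):

  # copy 65 MiB from Nvme0n1 into Nvme1n1, starting 16 MiB into the target
  spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --bs=1048576 --count=65 --seek=16 --json /dev/fd/62
  # read 1 MiB back from the same offset and verify the leading magic bytes
  spdk_dd --ib=Nvme1n1 --of=dd.dump1 --bs=1048576 --count=1 --skip=16 --json /dev/fd/62
  read -rn26 magic_check < dd.dump1
  [[ $magic_check == "This Is Our Magic, find it" ]]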
00:08:29.572 [2024-11-19 09:36:16.966522] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61091 ] 00:08:29.572 [2024-11-19 09:36:17.111061] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:29.572 [2024-11-19 09:36:17.174487] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:29.898 [2024-11-19 09:36:17.230807] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:29.898  [2024-11-19T09:36:17.801Z] Copying: 5120/5120 [kB] (average 833 MBps) 00:08:30.178 00:08:30.178 09:36:17 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 '' 00:08:30.178 ************************************ 00:08:30.178 END TEST spdk_dd_bdev_to_bdev 00:08:30.178 ************************************ 00:08:30.178 00:08:30.178 real 0m7.156s 00:08:30.178 user 0m5.230s 00:08:30.178 sys 0m3.348s 00:08:30.178 09:36:17 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:30.178 09:36:17 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:30.178 09:36:17 spdk_dd -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:08:30.178 09:36:17 spdk_dd -- dd/dd.sh@25 -- # run_test spdk_dd_uring /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:08:30.178 09:36:17 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:30.178 09:36:17 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:30.178 09:36:17 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:30.178 ************************************ 00:08:30.178 START TEST spdk_dd_uring 00:08:30.178 ************************************ 00:08:30.178 09:36:17 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:08:30.178 * Looking for test storage... 
00:08:30.178 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:30.178 09:36:17 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:30.178 09:36:17 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1693 -- # lcov --version 00:08:30.178 09:36:17 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:30.439 09:36:17 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:30.439 09:36:17 spdk_dd.spdk_dd_uring -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:30.439 09:36:17 spdk_dd.spdk_dd_uring -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:30.439 09:36:17 spdk_dd.spdk_dd_uring -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:30.439 09:36:17 spdk_dd.spdk_dd_uring -- scripts/common.sh@336 -- # IFS=.-: 00:08:30.439 09:36:17 spdk_dd.spdk_dd_uring -- scripts/common.sh@336 -- # read -ra ver1 00:08:30.439 09:36:17 spdk_dd.spdk_dd_uring -- scripts/common.sh@337 -- # IFS=.-: 00:08:30.439 09:36:17 spdk_dd.spdk_dd_uring -- scripts/common.sh@337 -- # read -ra ver2 00:08:30.439 09:36:17 spdk_dd.spdk_dd_uring -- scripts/common.sh@338 -- # local 'op=<' 00:08:30.439 09:36:17 spdk_dd.spdk_dd_uring -- scripts/common.sh@340 -- # ver1_l=2 00:08:30.439 09:36:17 spdk_dd.spdk_dd_uring -- scripts/common.sh@341 -- # ver2_l=1 00:08:30.439 09:36:17 spdk_dd.spdk_dd_uring -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:30.439 09:36:17 spdk_dd.spdk_dd_uring -- scripts/common.sh@344 -- # case "$op" in 00:08:30.439 09:36:17 spdk_dd.spdk_dd_uring -- scripts/common.sh@345 -- # : 1 00:08:30.439 09:36:17 spdk_dd.spdk_dd_uring -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:30.439 09:36:17 spdk_dd.spdk_dd_uring -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:30.439 09:36:17 spdk_dd.spdk_dd_uring -- scripts/common.sh@365 -- # decimal 1 00:08:30.439 09:36:17 spdk_dd.spdk_dd_uring -- scripts/common.sh@353 -- # local d=1 00:08:30.439 09:36:17 spdk_dd.spdk_dd_uring -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:30.439 09:36:17 spdk_dd.spdk_dd_uring -- scripts/common.sh@355 -- # echo 1 00:08:30.439 09:36:17 spdk_dd.spdk_dd_uring -- scripts/common.sh@365 -- # ver1[v]=1 00:08:30.439 09:36:17 spdk_dd.spdk_dd_uring -- scripts/common.sh@366 -- # decimal 2 00:08:30.439 09:36:17 spdk_dd.spdk_dd_uring -- scripts/common.sh@353 -- # local d=2 00:08:30.439 09:36:17 spdk_dd.spdk_dd_uring -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:30.439 09:36:17 spdk_dd.spdk_dd_uring -- scripts/common.sh@355 -- # echo 2 00:08:30.439 09:36:17 spdk_dd.spdk_dd_uring -- scripts/common.sh@366 -- # ver2[v]=2 00:08:30.439 09:36:17 spdk_dd.spdk_dd_uring -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:30.439 09:36:17 spdk_dd.spdk_dd_uring -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:30.439 09:36:17 spdk_dd.spdk_dd_uring -- scripts/common.sh@368 -- # return 0 00:08:30.439 09:36:17 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:30.439 09:36:17 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:30.439 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:30.439 --rc genhtml_branch_coverage=1 00:08:30.439 --rc genhtml_function_coverage=1 00:08:30.439 --rc genhtml_legend=1 00:08:30.439 --rc geninfo_all_blocks=1 00:08:30.439 --rc geninfo_unexecuted_blocks=1 00:08:30.439 00:08:30.439 ' 00:08:30.439 09:36:17 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:30.439 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:30.439 --rc genhtml_branch_coverage=1 00:08:30.439 --rc genhtml_function_coverage=1 00:08:30.439 --rc genhtml_legend=1 00:08:30.439 --rc geninfo_all_blocks=1 00:08:30.439 --rc geninfo_unexecuted_blocks=1 00:08:30.439 00:08:30.439 ' 00:08:30.439 09:36:17 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:30.439 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:30.439 --rc genhtml_branch_coverage=1 00:08:30.439 --rc genhtml_function_coverage=1 00:08:30.439 --rc genhtml_legend=1 00:08:30.439 --rc geninfo_all_blocks=1 00:08:30.439 --rc geninfo_unexecuted_blocks=1 00:08:30.439 00:08:30.439 ' 00:08:30.439 09:36:17 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:30.439 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:30.439 --rc genhtml_branch_coverage=1 00:08:30.439 --rc genhtml_function_coverage=1 00:08:30.439 --rc genhtml_legend=1 00:08:30.439 --rc geninfo_all_blocks=1 00:08:30.439 --rc geninfo_unexecuted_blocks=1 00:08:30.439 00:08:30.439 ' 00:08:30.439 09:36:17 spdk_dd.spdk_dd_uring -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:30.439 09:36:17 spdk_dd.spdk_dd_uring -- scripts/common.sh@15 -- # shopt -s extglob 00:08:30.439 09:36:17 spdk_dd.spdk_dd_uring -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:30.439 09:36:17 spdk_dd.spdk_dd_uring -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:30.439 09:36:17 spdk_dd.spdk_dd_uring -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:30.439 09:36:17 spdk_dd.spdk_dd_uring -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.439 09:36:17 spdk_dd.spdk_dd_uring -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.439 09:36:17 spdk_dd.spdk_dd_uring -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.439 09:36:17 spdk_dd.spdk_dd_uring -- paths/export.sh@5 -- # export PATH 00:08:30.439 09:36:17 spdk_dd.spdk_dd_uring -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.439 09:36:17 spdk_dd.spdk_dd_uring -- dd/uring.sh@103 -- # run_test dd_uring_copy uring_zram_copy 00:08:30.439 09:36:17 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:30.439 09:36:17 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:30.439 09:36:17 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:08:30.439 ************************************ 00:08:30.439 START TEST dd_uring_copy 00:08:30.439 ************************************ 00:08:30.439 09:36:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1129 -- # uring_zram_copy 00:08:30.439 09:36:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@15 -- # local zram_dev_id 00:08:30.439 09:36:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@16 -- # local magic 00:08:30.439 09:36:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@17 -- # local magic_file0=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 00:08:30.439 09:36:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@18 -- # local magic_file1=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:08:30.439 
09:36:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@19 -- # local verify_magic 00:08:30.439 09:36:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@21 -- # init_zram 00:08:30.439 09:36:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@159 -- # [[ -e /sys/class/zram-control ]] 00:08:30.439 09:36:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@160 -- # return 00:08:30.439 09:36:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # create_zram_dev 00:08:30.439 09:36:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@164 -- # cat /sys/class/zram-control/hot_add 00:08:30.439 09:36:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # zram_dev_id=1 00:08:30.439 09:36:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@23 -- # set_zram_dev 1 512M 00:08:30.439 09:36:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@177 -- # local id=1 00:08:30.439 09:36:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@178 -- # local size=512M 00:08:30.439 09:36:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@180 -- # [[ -e /sys/block/zram1 ]] 00:08:30.439 09:36:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@182 -- # echo 512M 00:08:30.439 09:36:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@25 -- # local ubdev=uring0 ufile=/dev/zram1 00:08:30.439 09:36:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # method_bdev_uring_create_0=(['filename']='/dev/zram1' ['name']='uring0') 00:08:30.439 09:36:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # local -A method_bdev_uring_create_0 00:08:30.439 09:36:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@33 -- # local mbdev=malloc0 mbdev_b=1048576 mbdev_bs=512 00:08:30.439 09:36:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:08:30.440 09:36:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # local -A method_bdev_malloc_create_0 00:08:30.440 09:36:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # gen_bytes 1024 00:08:30.440 09:36:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@98 -- # xtrace_disable 00:08:30.440 09:36:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:08:30.440 09:36:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # magic=xtp0ebq0v8goaq2s27gjtyffex2ccs4v3odwp2veblqugbd01y88gft2ww64nl9aazabbl6oyhd1z7kk2yvww4ll7dz7wv4ifpm3uk1bd3rxlyl3pxkoiznpy4eoyo9fdczefyjskxb8iop2zqvucy1tgfgwiv0m8j9r8xpplm0vcga5t0s6dhqwhdh237zzy0pfpcl93vstg40zuwo3ro8jhtgdntlie6899tdkat1md9c3yzw6qq0f8fpbs3yfy4w0085npa2dwp8kr7wf5mb61w7qa3bn9v2urjjo1nl8vb1b46cofh63f93m9yrp7nvceg6wgc1phs3dvxyyw2cnwk85778mzrx1w2pl371j115kx5bvb6f5dzpl5imxy0odoza916iauml91pe40i1ouoijishoyxa9jrv8u6frst4akcwtsxgv37u96pm7qlm9ytaxb81wjycs164tfda5za25fg4tg30nwqeilwkejw5ohihtodeu8uhd875ts3o8ld14nmiuzy0qnqvtjkevearqv8idectctce2hw8zpd24dl5n92ku0xnd95uw0eq8ovztp4cyik72z5tnkegm4odtvubcrn3khwmaddln11gy7z7fmavu2h6qz0mgdn76qgmvkud7isxkbkpjjvig37d1b6fiwm4gfe7k7k99il02idd931nh0jja5j16tzko88nx9tiymjn6vm454hg4rpdtz5wooy5hujjgin9w5t9t92zleiyyuz6l03yoyjjl7oj2y7n2283omz7np2nozmnhvnkincxz484qwbd5d0ntdjwn1yvym8vh4ehax44zhth958a3lv9amsoyr9ifdmvvm4ffistrnknqpi6uyjebqk619ngqdiqxvge24pk088njy9tlfavgx7hm9mccr3a42gy5lgclto6wdj6id6b6838fnhisxhjydzr2enmigqz8me600vv0d1csrkvbmogxiygs53vas6619m57hrr2qfginpi3dhfuvzu9 00:08:30.440 09:36:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@42 -- # echo 
xtp0ebq0v8goaq2s27gjtyffex2ccs4v3odwp2veblqugbd01y88gft2ww64nl9aazabbl6oyhd1z7kk2yvww4ll7dz7wv4ifpm3uk1bd3rxlyl3pxkoiznpy4eoyo9fdczefyjskxb8iop2zqvucy1tgfgwiv0m8j9r8xpplm0vcga5t0s6dhqwhdh237zzy0pfpcl93vstg40zuwo3ro8jhtgdntlie6899tdkat1md9c3yzw6qq0f8fpbs3yfy4w0085npa2dwp8kr7wf5mb61w7qa3bn9v2urjjo1nl8vb1b46cofh63f93m9yrp7nvceg6wgc1phs3dvxyyw2cnwk85778mzrx1w2pl371j115kx5bvb6f5dzpl5imxy0odoza916iauml91pe40i1ouoijishoyxa9jrv8u6frst4akcwtsxgv37u96pm7qlm9ytaxb81wjycs164tfda5za25fg4tg30nwqeilwkejw5ohihtodeu8uhd875ts3o8ld14nmiuzy0qnqvtjkevearqv8idectctce2hw8zpd24dl5n92ku0xnd95uw0eq8ovztp4cyik72z5tnkegm4odtvubcrn3khwmaddln11gy7z7fmavu2h6qz0mgdn76qgmvkud7isxkbkpjjvig37d1b6fiwm4gfe7k7k99il02idd931nh0jja5j16tzko88nx9tiymjn6vm454hg4rpdtz5wooy5hujjgin9w5t9t92zleiyyuz6l03yoyjjl7oj2y7n2283omz7np2nozmnhvnkincxz484qwbd5d0ntdjwn1yvym8vh4ehax44zhth958a3lv9amsoyr9ifdmvvm4ffistrnknqpi6uyjebqk619ngqdiqxvge24pk088njy9tlfavgx7hm9mccr3a42gy5lgclto6wdj6id6b6838fnhisxhjydzr2enmigqz8me600vv0d1csrkvbmogxiygs53vas6619m57hrr2qfginpi3dhfuvzu9 00:08:30.440 09:36:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --oflag=append --bs=536869887 --count=1 00:08:30.440 [2024-11-19 09:36:17.982399] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:08:30.440 [2024-11-19 09:36:17.982533] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61169 ] 00:08:30.699 [2024-11-19 09:36:18.131375] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:30.699 [2024-11-19 09:36:18.193086] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:30.699 [2024-11-19 09:36:18.250025] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:31.635  [2024-11-19T09:36:19.516Z] Copying: 511/511 [MB] (average 1034 MBps) 00:08:31.893 00:08:31.893 09:36:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --ob=uring0 --json /dev/fd/62 00:08:31.893 09:36:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # gen_conf 00:08:31.893 09:36:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:08:31.893 09:36:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:08:31.893 [2024-11-19 09:36:19.399697] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 
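The dd_uring_copy setup traced above is plain sysfs plus spdk_dd driven by a JSON config on fd 62: a 512 MiB zram device is hot-added to back a uring bdev, a ~512 MiB dump file is generated (a 1 KiB random magic followed by zero padding), and that file is copied into uring0. A rough by-hand equivalent, assuming zram device 1 is free and with paths shortened for illustration:

  # back the uring bdev with a fresh 512M zram device
  id=$(cat /sys/class/zram-control/hot_add)     # prints the new device id, e.g. 1
  echo 512M > /sys/block/zram${id}/disksize
  # zero-pad the magic file up to ~512 MiB, then copy it into the uring0 bdev
  spdk_dd --if=/dev/zero --of=magic.dump0 --oflag=append --bs=536869887 --count=1
  spdk_dd --if=magic.dump0 --ob=uring0 --json /dev/fd/62   # config: malloc0 + uring0 on /dev/zram${id}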
00:08:31.893 [2024-11-19 09:36:19.399775] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61191 ] 00:08:31.893 { 00:08:31.893 "subsystems": [ 00:08:31.893 { 00:08:31.893 "subsystem": "bdev", 00:08:31.893 "config": [ 00:08:31.893 { 00:08:31.893 "params": { 00:08:31.893 "block_size": 512, 00:08:31.893 "num_blocks": 1048576, 00:08:31.893 "name": "malloc0" 00:08:31.893 }, 00:08:31.893 "method": "bdev_malloc_create" 00:08:31.893 }, 00:08:31.893 { 00:08:31.893 "params": { 00:08:31.893 "filename": "/dev/zram1", 00:08:31.893 "name": "uring0" 00:08:31.893 }, 00:08:31.893 "method": "bdev_uring_create" 00:08:31.893 }, 00:08:31.893 { 00:08:31.893 "method": "bdev_wait_for_examine" 00:08:31.893 } 00:08:31.893 ] 00:08:31.893 } 00:08:31.893 ] 00:08:31.893 } 00:08:32.151 [2024-11-19 09:36:19.544990] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:32.151 [2024-11-19 09:36:19.607477] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.151 [2024-11-19 09:36:19.665835] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:33.528  [2024-11-19T09:36:22.086Z] Copying: 219/512 [MB] (219 MBps) [2024-11-19T09:36:22.346Z] Copying: 454/512 [MB] (235 MBps) [2024-11-19T09:36:22.604Z] Copying: 512/512 [MB] (average 229 MBps) 00:08:34.981 00:08:34.981 09:36:22 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 --json /dev/fd/62 00:08:34.981 09:36:22 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # gen_conf 00:08:34.981 09:36:22 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:08:34.981 09:36:22 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:08:34.981 [2024-11-19 09:36:22.545538] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 
00:08:34.981 [2024-11-19 09:36:22.545642] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61235 ] 00:08:34.981 { 00:08:34.982 "subsystems": [ 00:08:34.982 { 00:08:34.982 "subsystem": "bdev", 00:08:34.982 "config": [ 00:08:34.982 { 00:08:34.982 "params": { 00:08:34.982 "block_size": 512, 00:08:34.982 "num_blocks": 1048576, 00:08:34.982 "name": "malloc0" 00:08:34.982 }, 00:08:34.982 "method": "bdev_malloc_create" 00:08:34.982 }, 00:08:34.982 { 00:08:34.982 "params": { 00:08:34.982 "filename": "/dev/zram1", 00:08:34.982 "name": "uring0" 00:08:34.982 }, 00:08:34.982 "method": "bdev_uring_create" 00:08:34.982 }, 00:08:34.982 { 00:08:34.982 "method": "bdev_wait_for_examine" 00:08:34.982 } 00:08:34.982 ] 00:08:34.982 } 00:08:34.982 ] 00:08:34.982 } 00:08:35.241 [2024-11-19 09:36:22.692890] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:35.241 [2024-11-19 09:36:22.756992] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:35.241 [2024-11-19 09:36:22.812565] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:36.679  [2024-11-19T09:36:25.235Z] Copying: 171/512 [MB] (171 MBps) [2024-11-19T09:36:26.170Z] Copying: 363/512 [MB] (192 MBps) [2024-11-19T09:36:26.430Z] Copying: 512/512 [MB] (average 178 MBps) 00:08:38.807 00:08:38.807 09:36:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@65 -- # read -rn1024 verify_magic 00:08:38.807 09:36:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@66 -- # [[ xtp0ebq0v8goaq2s27gjtyffex2ccs4v3odwp2veblqugbd01y88gft2ww64nl9aazabbl6oyhd1z7kk2yvww4ll7dz7wv4ifpm3uk1bd3rxlyl3pxkoiznpy4eoyo9fdczefyjskxb8iop2zqvucy1tgfgwiv0m8j9r8xpplm0vcga5t0s6dhqwhdh237zzy0pfpcl93vstg40zuwo3ro8jhtgdntlie6899tdkat1md9c3yzw6qq0f8fpbs3yfy4w0085npa2dwp8kr7wf5mb61w7qa3bn9v2urjjo1nl8vb1b46cofh63f93m9yrp7nvceg6wgc1phs3dvxyyw2cnwk85778mzrx1w2pl371j115kx5bvb6f5dzpl5imxy0odoza916iauml91pe40i1ouoijishoyxa9jrv8u6frst4akcwtsxgv37u96pm7qlm9ytaxb81wjycs164tfda5za25fg4tg30nwqeilwkejw5ohihtodeu8uhd875ts3o8ld14nmiuzy0qnqvtjkevearqv8idectctce2hw8zpd24dl5n92ku0xnd95uw0eq8ovztp4cyik72z5tnkegm4odtvubcrn3khwmaddln11gy7z7fmavu2h6qz0mgdn76qgmvkud7isxkbkpjjvig37d1b6fiwm4gfe7k7k99il02idd931nh0jja5j16tzko88nx9tiymjn6vm454hg4rpdtz5wooy5hujjgin9w5t9t92zleiyyuz6l03yoyjjl7oj2y7n2283omz7np2nozmnhvnkincxz484qwbd5d0ntdjwn1yvym8vh4ehax44zhth958a3lv9amsoyr9ifdmvvm4ffistrnknqpi6uyjebqk619ngqdiqxvge24pk088njy9tlfavgx7hm9mccr3a42gy5lgclto6wdj6id6b6838fnhisxhjydzr2enmigqz8me600vv0d1csrkvbmogxiygs53vas6619m57hrr2qfginpi3dhfuvzu9 == 
\x\t\p\0\e\b\q\0\v\8\g\o\a\q\2\s\2\7\g\j\t\y\f\f\e\x\2\c\c\s\4\v\3\o\d\w\p\2\v\e\b\l\q\u\g\b\d\0\1\y\8\8\g\f\t\2\w\w\6\4\n\l\9\a\a\z\a\b\b\l\6\o\y\h\d\1\z\7\k\k\2\y\v\w\w\4\l\l\7\d\z\7\w\v\4\i\f\p\m\3\u\k\1\b\d\3\r\x\l\y\l\3\p\x\k\o\i\z\n\p\y\4\e\o\y\o\9\f\d\c\z\e\f\y\j\s\k\x\b\8\i\o\p\2\z\q\v\u\c\y\1\t\g\f\g\w\i\v\0\m\8\j\9\r\8\x\p\p\l\m\0\v\c\g\a\5\t\0\s\6\d\h\q\w\h\d\h\2\3\7\z\z\y\0\p\f\p\c\l\9\3\v\s\t\g\4\0\z\u\w\o\3\r\o\8\j\h\t\g\d\n\t\l\i\e\6\8\9\9\t\d\k\a\t\1\m\d\9\c\3\y\z\w\6\q\q\0\f\8\f\p\b\s\3\y\f\y\4\w\0\0\8\5\n\p\a\2\d\w\p\8\k\r\7\w\f\5\m\b\6\1\w\7\q\a\3\b\n\9\v\2\u\r\j\j\o\1\n\l\8\v\b\1\b\4\6\c\o\f\h\6\3\f\9\3\m\9\y\r\p\7\n\v\c\e\g\6\w\g\c\1\p\h\s\3\d\v\x\y\y\w\2\c\n\w\k\8\5\7\7\8\m\z\r\x\1\w\2\p\l\3\7\1\j\1\1\5\k\x\5\b\v\b\6\f\5\d\z\p\l\5\i\m\x\y\0\o\d\o\z\a\9\1\6\i\a\u\m\l\9\1\p\e\4\0\i\1\o\u\o\i\j\i\s\h\o\y\x\a\9\j\r\v\8\u\6\f\r\s\t\4\a\k\c\w\t\s\x\g\v\3\7\u\9\6\p\m\7\q\l\m\9\y\t\a\x\b\8\1\w\j\y\c\s\1\6\4\t\f\d\a\5\z\a\2\5\f\g\4\t\g\3\0\n\w\q\e\i\l\w\k\e\j\w\5\o\h\i\h\t\o\d\e\u\8\u\h\d\8\7\5\t\s\3\o\8\l\d\1\4\n\m\i\u\z\y\0\q\n\q\v\t\j\k\e\v\e\a\r\q\v\8\i\d\e\c\t\c\t\c\e\2\h\w\8\z\p\d\2\4\d\l\5\n\9\2\k\u\0\x\n\d\9\5\u\w\0\e\q\8\o\v\z\t\p\4\c\y\i\k\7\2\z\5\t\n\k\e\g\m\4\o\d\t\v\u\b\c\r\n\3\k\h\w\m\a\d\d\l\n\1\1\g\y\7\z\7\f\m\a\v\u\2\h\6\q\z\0\m\g\d\n\7\6\q\g\m\v\k\u\d\7\i\s\x\k\b\k\p\j\j\v\i\g\3\7\d\1\b\6\f\i\w\m\4\g\f\e\7\k\7\k\9\9\i\l\0\2\i\d\d\9\3\1\n\h\0\j\j\a\5\j\1\6\t\z\k\o\8\8\n\x\9\t\i\y\m\j\n\6\v\m\4\5\4\h\g\4\r\p\d\t\z\5\w\o\o\y\5\h\u\j\j\g\i\n\9\w\5\t\9\t\9\2\z\l\e\i\y\y\u\z\6\l\0\3\y\o\y\j\j\l\7\o\j\2\y\7\n\2\2\8\3\o\m\z\7\n\p\2\n\o\z\m\n\h\v\n\k\i\n\c\x\z\4\8\4\q\w\b\d\5\d\0\n\t\d\j\w\n\1\y\v\y\m\8\v\h\4\e\h\a\x\4\4\z\h\t\h\9\5\8\a\3\l\v\9\a\m\s\o\y\r\9\i\f\d\m\v\v\m\4\f\f\i\s\t\r\n\k\n\q\p\i\6\u\y\j\e\b\q\k\6\1\9\n\g\q\d\i\q\x\v\g\e\2\4\p\k\0\8\8\n\j\y\9\t\l\f\a\v\g\x\7\h\m\9\m\c\c\r\3\a\4\2\g\y\5\l\g\c\l\t\o\6\w\d\j\6\i\d\6\b\6\8\3\8\f\n\h\i\s\x\h\j\y\d\z\r\2\e\n\m\i\g\q\z\8\m\e\6\0\0\v\v\0\d\1\c\s\r\k\v\b\m\o\g\x\i\y\g\s\5\3\v\a\s\6\6\1\9\m\5\7\h\r\r\2\q\f\g\i\n\p\i\3\d\h\f\u\v\z\u\9 ]] 00:08:38.807 09:36:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@68 -- # read -rn1024 verify_magic 00:08:38.808 09:36:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@69 -- # [[ xtp0ebq0v8goaq2s27gjtyffex2ccs4v3odwp2veblqugbd01y88gft2ww64nl9aazabbl6oyhd1z7kk2yvww4ll7dz7wv4ifpm3uk1bd3rxlyl3pxkoiznpy4eoyo9fdczefyjskxb8iop2zqvucy1tgfgwiv0m8j9r8xpplm0vcga5t0s6dhqwhdh237zzy0pfpcl93vstg40zuwo3ro8jhtgdntlie6899tdkat1md9c3yzw6qq0f8fpbs3yfy4w0085npa2dwp8kr7wf5mb61w7qa3bn9v2urjjo1nl8vb1b46cofh63f93m9yrp7nvceg6wgc1phs3dvxyyw2cnwk85778mzrx1w2pl371j115kx5bvb6f5dzpl5imxy0odoza916iauml91pe40i1ouoijishoyxa9jrv8u6frst4akcwtsxgv37u96pm7qlm9ytaxb81wjycs164tfda5za25fg4tg30nwqeilwkejw5ohihtodeu8uhd875ts3o8ld14nmiuzy0qnqvtjkevearqv8idectctce2hw8zpd24dl5n92ku0xnd95uw0eq8ovztp4cyik72z5tnkegm4odtvubcrn3khwmaddln11gy7z7fmavu2h6qz0mgdn76qgmvkud7isxkbkpjjvig37d1b6fiwm4gfe7k7k99il02idd931nh0jja5j16tzko88nx9tiymjn6vm454hg4rpdtz5wooy5hujjgin9w5t9t92zleiyyuz6l03yoyjjl7oj2y7n2283omz7np2nozmnhvnkincxz484qwbd5d0ntdjwn1yvym8vh4ehax44zhth958a3lv9amsoyr9ifdmvvm4ffistrnknqpi6uyjebqk619ngqdiqxvge24pk088njy9tlfavgx7hm9mccr3a42gy5lgclto6wdj6id6b6838fnhisxhjydzr2enmigqz8me600vv0d1csrkvbmogxiygs53vas6619m57hrr2qfginpi3dhfuvzu9 == 
\x\t\p\0\e\b\q\0\v\8\g\o\a\q\2\s\2\7\g\j\t\y\f\f\e\x\2\c\c\s\4\v\3\o\d\w\p\2\v\e\b\l\q\u\g\b\d\0\1\y\8\8\g\f\t\2\w\w\6\4\n\l\9\a\a\z\a\b\b\l\6\o\y\h\d\1\z\7\k\k\2\y\v\w\w\4\l\l\7\d\z\7\w\v\4\i\f\p\m\3\u\k\1\b\d\3\r\x\l\y\l\3\p\x\k\o\i\z\n\p\y\4\e\o\y\o\9\f\d\c\z\e\f\y\j\s\k\x\b\8\i\o\p\2\z\q\v\u\c\y\1\t\g\f\g\w\i\v\0\m\8\j\9\r\8\x\p\p\l\m\0\v\c\g\a\5\t\0\s\6\d\h\q\w\h\d\h\2\3\7\z\z\y\0\p\f\p\c\l\9\3\v\s\t\g\4\0\z\u\w\o\3\r\o\8\j\h\t\g\d\n\t\l\i\e\6\8\9\9\t\d\k\a\t\1\m\d\9\c\3\y\z\w\6\q\q\0\f\8\f\p\b\s\3\y\f\y\4\w\0\0\8\5\n\p\a\2\d\w\p\8\k\r\7\w\f\5\m\b\6\1\w\7\q\a\3\b\n\9\v\2\u\r\j\j\o\1\n\l\8\v\b\1\b\4\6\c\o\f\h\6\3\f\9\3\m\9\y\r\p\7\n\v\c\e\g\6\w\g\c\1\p\h\s\3\d\v\x\y\y\w\2\c\n\w\k\8\5\7\7\8\m\z\r\x\1\w\2\p\l\3\7\1\j\1\1\5\k\x\5\b\v\b\6\f\5\d\z\p\l\5\i\m\x\y\0\o\d\o\z\a\9\1\6\i\a\u\m\l\9\1\p\e\4\0\i\1\o\u\o\i\j\i\s\h\o\y\x\a\9\j\r\v\8\u\6\f\r\s\t\4\a\k\c\w\t\s\x\g\v\3\7\u\9\6\p\m\7\q\l\m\9\y\t\a\x\b\8\1\w\j\y\c\s\1\6\4\t\f\d\a\5\z\a\2\5\f\g\4\t\g\3\0\n\w\q\e\i\l\w\k\e\j\w\5\o\h\i\h\t\o\d\e\u\8\u\h\d\8\7\5\t\s\3\o\8\l\d\1\4\n\m\i\u\z\y\0\q\n\q\v\t\j\k\e\v\e\a\r\q\v\8\i\d\e\c\t\c\t\c\e\2\h\w\8\z\p\d\2\4\d\l\5\n\9\2\k\u\0\x\n\d\9\5\u\w\0\e\q\8\o\v\z\t\p\4\c\y\i\k\7\2\z\5\t\n\k\e\g\m\4\o\d\t\v\u\b\c\r\n\3\k\h\w\m\a\d\d\l\n\1\1\g\y\7\z\7\f\m\a\v\u\2\h\6\q\z\0\m\g\d\n\7\6\q\g\m\v\k\u\d\7\i\s\x\k\b\k\p\j\j\v\i\g\3\7\d\1\b\6\f\i\w\m\4\g\f\e\7\k\7\k\9\9\i\l\0\2\i\d\d\9\3\1\n\h\0\j\j\a\5\j\1\6\t\z\k\o\8\8\n\x\9\t\i\y\m\j\n\6\v\m\4\5\4\h\g\4\r\p\d\t\z\5\w\o\o\y\5\h\u\j\j\g\i\n\9\w\5\t\9\t\9\2\z\l\e\i\y\y\u\z\6\l\0\3\y\o\y\j\j\l\7\o\j\2\y\7\n\2\2\8\3\o\m\z\7\n\p\2\n\o\z\m\n\h\v\n\k\i\n\c\x\z\4\8\4\q\w\b\d\5\d\0\n\t\d\j\w\n\1\y\v\y\m\8\v\h\4\e\h\a\x\4\4\z\h\t\h\9\5\8\a\3\l\v\9\a\m\s\o\y\r\9\i\f\d\m\v\v\m\4\f\f\i\s\t\r\n\k\n\q\p\i\6\u\y\j\e\b\q\k\6\1\9\n\g\q\d\i\q\x\v\g\e\2\4\p\k\0\8\8\n\j\y\9\t\l\f\a\v\g\x\7\h\m\9\m\c\c\r\3\a\4\2\g\y\5\l\g\c\l\t\o\6\w\d\j\6\i\d\6\b\6\8\3\8\f\n\h\i\s\x\h\j\y\d\z\r\2\e\n\m\i\g\q\z\8\m\e\6\0\0\v\v\0\d\1\c\s\r\k\v\b\m\o\g\x\i\y\g\s\5\3\v\a\s\6\6\1\9\m\5\7\h\r\r\2\q\f\g\i\n\p\i\3\d\h\f\u\v\z\u\9 ]] 00:08:38.808 09:36:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@71 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:08:39.066 09:36:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --ob=malloc0 --json /dev/fd/62 00:08:39.066 09:36:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # gen_conf 00:08:39.066 09:36:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:08:39.066 09:36:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:08:39.066 [2024-11-19 09:36:26.687354] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 
00:08:39.066 [2024-11-19 09:36:26.687593] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61302 ] 00:08:39.324 { 00:08:39.324 "subsystems": [ 00:08:39.324 { 00:08:39.324 "subsystem": "bdev", 00:08:39.324 "config": [ 00:08:39.324 { 00:08:39.324 "params": { 00:08:39.324 "block_size": 512, 00:08:39.324 "num_blocks": 1048576, 00:08:39.324 "name": "malloc0" 00:08:39.324 }, 00:08:39.324 "method": "bdev_malloc_create" 00:08:39.324 }, 00:08:39.324 { 00:08:39.324 "params": { 00:08:39.324 "filename": "/dev/zram1", 00:08:39.324 "name": "uring0" 00:08:39.324 }, 00:08:39.324 "method": "bdev_uring_create" 00:08:39.324 }, 00:08:39.324 { 00:08:39.324 "method": "bdev_wait_for_examine" 00:08:39.324 } 00:08:39.324 ] 00:08:39.324 } 00:08:39.324 ] 00:08:39.324 } 00:08:39.324 [2024-11-19 09:36:26.833128] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:39.324 [2024-11-19 09:36:26.891340] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:39.324 [2024-11-19 09:36:26.947388] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:40.769  [2024-11-19T09:36:29.331Z] Copying: 153/512 [MB] (153 MBps) [2024-11-19T09:36:30.269Z] Copying: 308/512 [MB] (154 MBps) [2024-11-19T09:36:30.528Z] Copying: 461/512 [MB] (153 MBps) [2024-11-19T09:36:31.095Z] Copying: 512/512 [MB] (average 153 MBps) 00:08:43.472 00:08:43.472 09:36:30 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # method_bdev_uring_delete_0=(['name']='uring0') 00:08:43.472 09:36:30 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # local -A method_bdev_uring_delete_0 00:08:43.472 09:36:30 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --of=/dev/fd/61 --json /dev/fd/59 00:08:43.472 09:36:30 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:08:43.472 09:36:30 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:08:43.472 09:36:30 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # gen_conf 00:08:43.472 09:36:30 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:08:43.472 09:36:30 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:08:43.472 [2024-11-19 09:36:30.922915] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 
00:08:43.472 [2024-11-19 09:36:30.923338] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61361 ] 00:08:43.472 { 00:08:43.472 "subsystems": [ 00:08:43.472 { 00:08:43.472 "subsystem": "bdev", 00:08:43.472 "config": [ 00:08:43.472 { 00:08:43.472 "params": { 00:08:43.472 "block_size": 512, 00:08:43.472 "num_blocks": 1048576, 00:08:43.472 "name": "malloc0" 00:08:43.472 }, 00:08:43.472 "method": "bdev_malloc_create" 00:08:43.472 }, 00:08:43.472 { 00:08:43.472 "params": { 00:08:43.472 "filename": "/dev/zram1", 00:08:43.472 "name": "uring0" 00:08:43.472 }, 00:08:43.472 "method": "bdev_uring_create" 00:08:43.472 }, 00:08:43.472 { 00:08:43.472 "params": { 00:08:43.472 "name": "uring0" 00:08:43.472 }, 00:08:43.472 "method": "bdev_uring_delete" 00:08:43.472 }, 00:08:43.473 { 00:08:43.473 "method": "bdev_wait_for_examine" 00:08:43.473 } 00:08:43.473 ] 00:08:43.473 } 00:08:43.473 ] 00:08:43.473 } 00:08:43.731 [2024-11-19 09:36:31.107931] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:43.731 [2024-11-19 09:36:31.173803] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:43.731 [2024-11-19 09:36:31.228769] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:43.990  [2024-11-19T09:36:31.872Z] Copying: 0/0 [B] (average 0 Bps) 00:08:44.249 00:08:44.249 09:36:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:08:44.249 09:36:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # : 00:08:44.249 09:36:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@652 -- # local es=0 00:08:44.249 09:36:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:08:44.249 09:36:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:44.249 09:36:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # gen_conf 00:08:44.249 09:36:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:08:44.249 09:36:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:44.249 09:36:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:44.249 09:36:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:08:44.249 09:36:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:44.249 09:36:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:44.249 09:36:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:44.249 09:36:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:44.249 09:36:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:44.249 09:36:31 
spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:08:44.508 [2024-11-19 09:36:31.879838] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:08:44.508 [2024-11-19 09:36:31.880100] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61391 ] 00:08:44.508 { 00:08:44.508 "subsystems": [ 00:08:44.508 { 00:08:44.508 "subsystem": "bdev", 00:08:44.508 "config": [ 00:08:44.508 { 00:08:44.508 "params": { 00:08:44.508 "block_size": 512, 00:08:44.508 "num_blocks": 1048576, 00:08:44.508 "name": "malloc0" 00:08:44.508 }, 00:08:44.508 "method": "bdev_malloc_create" 00:08:44.508 }, 00:08:44.508 { 00:08:44.508 "params": { 00:08:44.508 "filename": "/dev/zram1", 00:08:44.508 "name": "uring0" 00:08:44.508 }, 00:08:44.508 "method": "bdev_uring_create" 00:08:44.508 }, 00:08:44.508 { 00:08:44.508 "params": { 00:08:44.508 "name": "uring0" 00:08:44.508 }, 00:08:44.508 "method": "bdev_uring_delete" 00:08:44.508 }, 00:08:44.508 { 00:08:44.508 "method": "bdev_wait_for_examine" 00:08:44.508 } 00:08:44.508 ] 00:08:44.508 } 00:08:44.508 ] 00:08:44.508 } 00:08:44.508 [2024-11-19 09:36:32.023847] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:44.508 [2024-11-19 09:36:32.077763] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:44.766 [2024-11-19 09:36:32.132068] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:44.766 [2024-11-19 09:36:32.334059] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: uring0 00:08:44.766 [2024-11-19 09:36:32.334115] spdk_dd.c: 933:dd_open_bdev: *ERROR*: Could not open bdev uring0: No such device 00:08:44.766 [2024-11-19 09:36:32.334143] spdk_dd.c:1090:dd_run: *ERROR*: uring0: No such device 00:08:44.766 [2024-11-19 09:36:32.334153] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:45.025 [2024-11-19 09:36:32.645914] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:45.284 09:36:32 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@655 -- # es=237 00:08:45.284 09:36:32 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:45.284 09:36:32 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@664 -- # es=109 00:08:45.284 09:36:32 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@665 -- # case "$es" in 00:08:45.284 09:36:32 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@672 -- # es=1 00:08:45.284 09:36:32 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:45.284 09:36:32 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@99 -- # remove_zram_dev 1 00:08:45.284 09:36:32 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@168 -- # local id=1 00:08:45.284 09:36:32 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@170 -- # [[ -e /sys/block/zram1 ]] 00:08:45.284 09:36:32 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@172 -- # echo 1 00:08:45.284 09:36:32 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@173 -- # echo 1 00:08:45.284 09:36:32 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@100 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:08:45.542 00:08:45.542 ************************************ 00:08:45.542 END TEST dd_uring_copy 00:08:45.542 ************************************ 00:08:45.542 real 0m15.112s 00:08:45.542 user 0m10.271s 00:08:45.542 sys 0m12.551s 00:08:45.542 09:36:33 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:45.542 09:36:33 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:08:45.542 00:08:45.542 real 0m15.357s 00:08:45.542 user 0m10.416s 00:08:45.542 sys 0m12.651s 00:08:45.542 09:36:33 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:45.542 09:36:33 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:08:45.542 ************************************ 00:08:45.542 END TEST spdk_dd_uring 00:08:45.542 ************************************ 00:08:45.542 09:36:33 spdk_dd -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:08:45.542 09:36:33 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:45.542 09:36:33 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:45.542 09:36:33 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:45.542 ************************************ 00:08:45.542 START TEST spdk_dd_sparse 00:08:45.542 ************************************ 00:08:45.542 09:36:33 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:08:45.801 * Looking for test storage... 00:08:45.801 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:45.801 09:36:33 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:45.801 09:36:33 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:45.801 09:36:33 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1693 -- # lcov --version 00:08:45.801 09:36:33 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:45.801 09:36:33 spdk_dd.spdk_dd_sparse -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:45.801 09:36:33 spdk_dd.spdk_dd_sparse -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:45.801 09:36:33 spdk_dd.spdk_dd_sparse -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:45.801 09:36:33 spdk_dd.spdk_dd_sparse -- scripts/common.sh@336 -- # IFS=.-: 00:08:45.801 09:36:33 spdk_dd.spdk_dd_sparse -- scripts/common.sh@336 -- # read -ra ver1 00:08:45.801 09:36:33 spdk_dd.spdk_dd_sparse -- scripts/common.sh@337 -- # IFS=.-: 00:08:45.801 09:36:33 spdk_dd.spdk_dd_sparse -- scripts/common.sh@337 -- # read -ra ver2 00:08:45.801 09:36:33 spdk_dd.spdk_dd_sparse -- scripts/common.sh@338 -- # local 'op=<' 00:08:45.801 09:36:33 spdk_dd.spdk_dd_sparse -- scripts/common.sh@340 -- # ver1_l=2 00:08:45.801 09:36:33 spdk_dd.spdk_dd_sparse -- scripts/common.sh@341 -- # ver2_l=1 00:08:45.801 09:36:33 spdk_dd.spdk_dd_sparse -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:45.801 09:36:33 spdk_dd.spdk_dd_sparse -- scripts/common.sh@344 -- # case "$op" in 00:08:45.801 09:36:33 spdk_dd.spdk_dd_sparse -- scripts/common.sh@345 -- # : 1 00:08:45.801 09:36:33 spdk_dd.spdk_dd_sparse -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:45.801 09:36:33 spdk_dd.spdk_dd_sparse -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:45.801 09:36:33 spdk_dd.spdk_dd_sparse -- scripts/common.sh@365 -- # decimal 1 00:08:45.801 09:36:33 spdk_dd.spdk_dd_sparse -- scripts/common.sh@353 -- # local d=1 00:08:45.801 09:36:33 spdk_dd.spdk_dd_sparse -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:45.801 09:36:33 spdk_dd.spdk_dd_sparse -- scripts/common.sh@355 -- # echo 1 00:08:45.801 09:36:33 spdk_dd.spdk_dd_sparse -- scripts/common.sh@365 -- # ver1[v]=1 00:08:45.801 09:36:33 spdk_dd.spdk_dd_sparse -- scripts/common.sh@366 -- # decimal 2 00:08:45.801 09:36:33 spdk_dd.spdk_dd_sparse -- scripts/common.sh@353 -- # local d=2 00:08:45.801 09:36:33 spdk_dd.spdk_dd_sparse -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:45.801 09:36:33 spdk_dd.spdk_dd_sparse -- scripts/common.sh@355 -- # echo 2 00:08:45.801 09:36:33 spdk_dd.spdk_dd_sparse -- scripts/common.sh@366 -- # ver2[v]=2 00:08:45.801 09:36:33 spdk_dd.spdk_dd_sparse -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:45.801 09:36:33 spdk_dd.spdk_dd_sparse -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:45.801 09:36:33 spdk_dd.spdk_dd_sparse -- scripts/common.sh@368 -- # return 0 00:08:45.801 09:36:33 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:45.801 09:36:33 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:45.802 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:45.802 --rc genhtml_branch_coverage=1 00:08:45.802 --rc genhtml_function_coverage=1 00:08:45.802 --rc genhtml_legend=1 00:08:45.802 --rc geninfo_all_blocks=1 00:08:45.802 --rc geninfo_unexecuted_blocks=1 00:08:45.802 00:08:45.802 ' 00:08:45.802 09:36:33 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:45.802 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:45.802 --rc genhtml_branch_coverage=1 00:08:45.802 --rc genhtml_function_coverage=1 00:08:45.802 --rc genhtml_legend=1 00:08:45.802 --rc geninfo_all_blocks=1 00:08:45.802 --rc geninfo_unexecuted_blocks=1 00:08:45.802 00:08:45.802 ' 00:08:45.802 09:36:33 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:45.802 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:45.802 --rc genhtml_branch_coverage=1 00:08:45.802 --rc genhtml_function_coverage=1 00:08:45.802 --rc genhtml_legend=1 00:08:45.802 --rc geninfo_all_blocks=1 00:08:45.802 --rc geninfo_unexecuted_blocks=1 00:08:45.802 00:08:45.802 ' 00:08:45.802 09:36:33 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:45.802 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:45.802 --rc genhtml_branch_coverage=1 00:08:45.802 --rc genhtml_function_coverage=1 00:08:45.802 --rc genhtml_legend=1 00:08:45.802 --rc geninfo_all_blocks=1 00:08:45.802 --rc geninfo_unexecuted_blocks=1 00:08:45.802 00:08:45.802 ' 00:08:45.802 09:36:33 spdk_dd.spdk_dd_sparse -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:45.802 09:36:33 spdk_dd.spdk_dd_sparse -- scripts/common.sh@15 -- # shopt -s extglob 00:08:45.802 09:36:33 spdk_dd.spdk_dd_sparse -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:45.802 09:36:33 spdk_dd.spdk_dd_sparse -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:45.802 09:36:33 spdk_dd.spdk_dd_sparse -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:45.802 09:36:33 
spdk_dd.spdk_dd_sparse -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.802 09:36:33 spdk_dd.spdk_dd_sparse -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.802 09:36:33 spdk_dd.spdk_dd_sparse -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.802 09:36:33 spdk_dd.spdk_dd_sparse -- paths/export.sh@5 -- # export PATH 00:08:45.802 09:36:33 spdk_dd.spdk_dd_sparse -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.802 09:36:33 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:08:45.802 09:36:33 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:08:45.802 09:36:33 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@110 -- # file1=file_zero1 00:08:45.802 09:36:33 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@111 -- # file2=file_zero2 00:08:45.802 09:36:33 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@112 -- # file3=file_zero3 00:08:45.802 09:36:33 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:08:45.802 09:36:33 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:08:45.802 09:36:33 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:08:45.802 09:36:33 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@118 -- # prepare 00:08:45.802 09:36:33 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:08:45.802 09:36:33 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:08:45.802 1+0 records in 00:08:45.802 1+0 records out 00:08:45.802 4194304 bytes (4.2 MB, 
4.0 MiB) copied, 0.00697629 s, 601 MB/s 00:08:45.802 09:36:33 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:08:45.802 1+0 records in 00:08:45.802 1+0 records out 00:08:45.802 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00756164 s, 555 MB/s 00:08:45.802 09:36:33 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:08:45.802 1+0 records in 00:08:45.802 1+0 records out 00:08:45.802 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00730166 s, 574 MB/s 00:08:45.802 09:36:33 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:08:45.802 09:36:33 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:45.802 09:36:33 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:45.802 09:36:33 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:08:45.802 ************************************ 00:08:45.802 START TEST dd_sparse_file_to_file 00:08:45.802 ************************************ 00:08:45.802 09:36:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1129 -- # file_to_file 00:08:45.802 09:36:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:08:45.802 09:36:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:08:45.802 09:36:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:08:45.802 09:36:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:08:45.802 09:36:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore') 00:08:45.802 09:36:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:08:45.802 09:36:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # gen_conf 00:08:45.802 09:36:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:08:45.802 09:36:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/common.sh@31 -- # xtrace_disable 00:08:45.802 09:36:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:08:45.802 { 00:08:45.802 "subsystems": [ 00:08:45.802 { 00:08:45.802 "subsystem": "bdev", 00:08:45.802 "config": [ 00:08:45.802 { 00:08:45.802 "params": { 00:08:45.802 "block_size": 4096, 00:08:45.802 "filename": "dd_sparse_aio_disk", 00:08:45.802 "name": "dd_aio" 00:08:45.802 }, 00:08:45.802 "method": "bdev_aio_create" 00:08:45.802 }, 00:08:45.802 { 00:08:45.802 "params": { 00:08:45.802 "lvs_name": "dd_lvstore", 00:08:45.802 "bdev_name": "dd_aio" 00:08:45.802 }, 00:08:45.802 "method": "bdev_lvol_create_lvstore" 00:08:45.802 }, 00:08:45.802 { 00:08:45.802 "method": "bdev_wait_for_examine" 00:08:45.802 } 00:08:45.802 ] 00:08:45.802 } 00:08:45.802 ] 00:08:45.802 } 00:08:45.802 [2024-11-19 09:36:33.390252] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 
00:08:45.802 [2024-11-19 09:36:33.390344] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61486 ] 00:08:46.061 [2024-11-19 09:36:33.539288] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:46.061 [2024-11-19 09:36:33.598244] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:46.061 [2024-11-19 09:36:33.652569] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:46.319  [2024-11-19T09:36:34.201Z] Copying: 12/36 [MB] (average 1000 MBps) 00:08:46.578 00:08:46.578 09:36:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:08:46.578 09:36:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat1_s=37748736 00:08:46.578 09:36:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:08:46.578 09:36:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat2_s=37748736 00:08:46.578 09:36:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:08:46.578 09:36:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:08:46.578 09:36:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat1_b=24576 00:08:46.578 09:36:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:08:46.578 ************************************ 00:08:46.579 END TEST dd_sparse_file_to_file 00:08:46.579 09:36:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat2_b=24576 00:08:46.579 09:36:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:08:46.579 00:08:46.579 real 0m0.652s 00:08:46.579 user 0m0.406s 00:08:46.579 sys 0m0.327s 00:08:46.579 09:36:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:46.579 09:36:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:08:46.579 ************************************ 00:08:46.579 09:36:34 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:08:46.579 09:36:34 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:46.579 09:36:34 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:46.579 09:36:34 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:08:46.579 ************************************ 00:08:46.579 START TEST dd_sparse_file_to_bdev 00:08:46.579 ************************************ 00:08:46.579 09:36:34 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1129 -- # file_to_bdev 00:08:46.579 09:36:34 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:08:46.579 09:36:34 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:08:46.579 09:36:34 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size_in_mib']='36' 
['thin_provision']='true') 00:08:46.579 09:36:34 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:08:46.579 09:36:34 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:08:46.579 09:36:34 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # gen_conf 00:08:46.579 09:36:34 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:08:46.579 09:36:34 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:46.579 { 00:08:46.579 "subsystems": [ 00:08:46.579 { 00:08:46.579 "subsystem": "bdev", 00:08:46.579 "config": [ 00:08:46.579 { 00:08:46.579 "params": { 00:08:46.579 "block_size": 4096, 00:08:46.579 "filename": "dd_sparse_aio_disk", 00:08:46.579 "name": "dd_aio" 00:08:46.579 }, 00:08:46.579 "method": "bdev_aio_create" 00:08:46.579 }, 00:08:46.579 { 00:08:46.579 "params": { 00:08:46.579 "lvs_name": "dd_lvstore", 00:08:46.579 "lvol_name": "dd_lvol", 00:08:46.579 "size_in_mib": 36, 00:08:46.579 "thin_provision": true 00:08:46.579 }, 00:08:46.579 "method": "bdev_lvol_create" 00:08:46.579 }, 00:08:46.579 { 00:08:46.579 "method": "bdev_wait_for_examine" 00:08:46.579 } 00:08:46.579 ] 00:08:46.579 } 00:08:46.579 ] 00:08:46.579 } 00:08:46.579 [2024-11-19 09:36:34.094789] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:08:46.579 [2024-11-19 09:36:34.094880] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61534 ] 00:08:46.838 [2024-11-19 09:36:34.246476] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:46.838 [2024-11-19 09:36:34.317197] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:46.838 [2024-11-19 09:36:34.376955] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:47.097  [2024-11-19T09:36:34.720Z] Copying: 12/36 [MB] (average 545 MBps) 00:08:47.097 00:08:47.097 00:08:47.097 real 0m0.649s 00:08:47.097 user 0m0.395s 00:08:47.097 sys 0m0.354s 00:08:47.097 09:36:34 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:47.097 09:36:34 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:47.097 ************************************ 00:08:47.097 END TEST dd_sparse_file_to_bdev 00:08:47.097 ************************************ 00:08:47.356 09:36:34 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file bdev_to_file 00:08:47.356 09:36:34 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:47.356 09:36:34 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:47.356 09:36:34 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:08:47.356 ************************************ 00:08:47.356 START TEST dd_sparse_bdev_to_file 00:08:47.356 ************************************ 00:08:47.356 09:36:34 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1129 -- # bdev_to_file 00:08:47.356 09:36:34 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@81 -- # local stat2_s stat2_b 
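The pass/fail criterion running through these sparse tests is the pair of stat probes visible above: the apparent size (%s) of source and destination must match at 37748736 bytes (36 MiB), while the allocated block count (%b, typically 512-byte units) must stay at 24576, i.e. 12 MiB of real data, exactly the three 4 MiB extents written by the seek'ed dd calls, proving the holes survived the copy. A minimal standalone sketch of that check, using GNU cp --sparse=always as a stand-in for the spdk_dd --sparse path and illustrative file names:

# build a 36 MiB apparent file containing three 4 MiB extents separated by holes
dd if=/dev/zero of=src.img bs=4M count=1
dd if=/dev/zero of=src.img bs=4M count=1 seek=4
dd if=/dev/zero of=src.img bs=4M count=1 seek=8
# copy while preserving holes, then compare apparent size and allocated blocks
cp --sparse=always src.img dst.img
[ "$(stat --printf=%s src.img)" = "$(stat --printf=%s dst.img)" ] &&
[ "$(stat --printf=%b src.img)" = "$(stat --printf=%b dst.img)" ] &&
echo "holes preserved"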
00:08:47.356 09:36:34 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:08:47.356 09:36:34 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:08:47.356 09:36:34 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:08:47.356 09:36:34 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:08:47.356 09:36:34 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # gen_conf 00:08:47.356 09:36:34 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/common.sh@31 -- # xtrace_disable 00:08:47.356 09:36:34 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:08:47.356 { 00:08:47.356 "subsystems": [ 00:08:47.356 { 00:08:47.356 "subsystem": "bdev", 00:08:47.356 "config": [ 00:08:47.356 { 00:08:47.356 "params": { 00:08:47.356 "block_size": 4096, 00:08:47.356 "filename": "dd_sparse_aio_disk", 00:08:47.356 "name": "dd_aio" 00:08:47.356 }, 00:08:47.356 "method": "bdev_aio_create" 00:08:47.356 }, 00:08:47.356 { 00:08:47.356 "method": "bdev_wait_for_examine" 00:08:47.356 } 00:08:47.356 ] 00:08:47.356 } 00:08:47.356 ] 00:08:47.356 } 00:08:47.356 [2024-11-19 09:36:34.795895] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:08:47.357 [2024-11-19 09:36:34.796000] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61572 ] 00:08:47.357 [2024-11-19 09:36:34.947942] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:47.616 [2024-11-19 09:36:35.013500] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:47.616 [2024-11-19 09:36:35.069879] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:47.616  [2024-11-19T09:36:35.497Z] Copying: 12/36 [MB] (average 1090 MBps) 00:08:47.874 00:08:47.874 09:36:35 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:08:47.874 09:36:35 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat2_s=37748736 00:08:47.874 09:36:35 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:08:47.874 09:36:35 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat3_s=37748736 00:08:47.874 09:36:35 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@100 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:08:47.874 09:36:35 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:08:47.874 09:36:35 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat2_b=24576 00:08:47.874 09:36:35 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:08:47.874 09:36:35 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat3_b=24576 00:08:47.874 09:36:35 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:08:47.874 00:08:47.874 real 0m0.644s 00:08:47.874 user 0m0.401s 
00:08:47.874 sys 0m0.347s 00:08:47.874 09:36:35 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:47.874 ************************************ 00:08:47.874 END TEST dd_sparse_bdev_to_file 00:08:47.874 ************************************ 00:08:47.874 09:36:35 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:08:47.874 09:36:35 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@1 -- # cleanup 00:08:47.874 09:36:35 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:08:47.874 09:36:35 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@12 -- # rm file_zero1 00:08:47.874 09:36:35 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@13 -- # rm file_zero2 00:08:47.874 09:36:35 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@14 -- # rm file_zero3 00:08:47.874 00:08:47.874 real 0m2.348s 00:08:47.875 user 0m1.391s 00:08:47.875 sys 0m1.239s 00:08:47.875 09:36:35 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:47.875 ************************************ 00:08:47.875 END TEST spdk_dd_sparse 00:08:47.875 ************************************ 00:08:47.875 09:36:35 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:08:47.875 09:36:35 spdk_dd -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:08:47.875 09:36:35 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:47.875 09:36:35 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:47.875 09:36:35 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:47.875 ************************************ 00:08:47.875 START TEST spdk_dd_negative 00:08:47.875 ************************************ 00:08:47.875 09:36:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:08:48.134 * Looking for test storage... 
00:08:48.134 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:48.134 09:36:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:48.134 09:36:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1693 -- # lcov --version 00:08:48.134 09:36:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:48.134 09:36:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:48.134 09:36:35 spdk_dd.spdk_dd_negative -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:48.134 09:36:35 spdk_dd.spdk_dd_negative -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:48.134 09:36:35 spdk_dd.spdk_dd_negative -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:48.134 09:36:35 spdk_dd.spdk_dd_negative -- scripts/common.sh@336 -- # IFS=.-: 00:08:48.134 09:36:35 spdk_dd.spdk_dd_negative -- scripts/common.sh@336 -- # read -ra ver1 00:08:48.134 09:36:35 spdk_dd.spdk_dd_negative -- scripts/common.sh@337 -- # IFS=.-: 00:08:48.134 09:36:35 spdk_dd.spdk_dd_negative -- scripts/common.sh@337 -- # read -ra ver2 00:08:48.134 09:36:35 spdk_dd.spdk_dd_negative -- scripts/common.sh@338 -- # local 'op=<' 00:08:48.134 09:36:35 spdk_dd.spdk_dd_negative -- scripts/common.sh@340 -- # ver1_l=2 00:08:48.134 09:36:35 spdk_dd.spdk_dd_negative -- scripts/common.sh@341 -- # ver2_l=1 00:08:48.134 09:36:35 spdk_dd.spdk_dd_negative -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:48.134 09:36:35 spdk_dd.spdk_dd_negative -- scripts/common.sh@344 -- # case "$op" in 00:08:48.134 09:36:35 spdk_dd.spdk_dd_negative -- scripts/common.sh@345 -- # : 1 00:08:48.134 09:36:35 spdk_dd.spdk_dd_negative -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:48.134 09:36:35 spdk_dd.spdk_dd_negative -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:48.134 09:36:35 spdk_dd.spdk_dd_negative -- scripts/common.sh@365 -- # decimal 1 00:08:48.134 09:36:35 spdk_dd.spdk_dd_negative -- scripts/common.sh@353 -- # local d=1 00:08:48.134 09:36:35 spdk_dd.spdk_dd_negative -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:48.134 09:36:35 spdk_dd.spdk_dd_negative -- scripts/common.sh@355 -- # echo 1 00:08:48.134 09:36:35 spdk_dd.spdk_dd_negative -- scripts/common.sh@365 -- # ver1[v]=1 00:08:48.134 09:36:35 spdk_dd.spdk_dd_negative -- scripts/common.sh@366 -- # decimal 2 00:08:48.134 09:36:35 spdk_dd.spdk_dd_negative -- scripts/common.sh@353 -- # local d=2 00:08:48.134 09:36:35 spdk_dd.spdk_dd_negative -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:48.134 09:36:35 spdk_dd.spdk_dd_negative -- scripts/common.sh@355 -- # echo 2 00:08:48.134 09:36:35 spdk_dd.spdk_dd_negative -- scripts/common.sh@366 -- # ver2[v]=2 00:08:48.134 09:36:35 spdk_dd.spdk_dd_negative -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:48.134 09:36:35 spdk_dd.spdk_dd_negative -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:48.134 09:36:35 spdk_dd.spdk_dd_negative -- scripts/common.sh@368 -- # return 0 00:08:48.134 09:36:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:48.134 09:36:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:48.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:48.134 --rc genhtml_branch_coverage=1 00:08:48.134 --rc genhtml_function_coverage=1 00:08:48.134 --rc genhtml_legend=1 00:08:48.134 --rc geninfo_all_blocks=1 00:08:48.134 --rc geninfo_unexecuted_blocks=1 00:08:48.134 00:08:48.134 ' 00:08:48.134 09:36:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:48.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:48.134 --rc genhtml_branch_coverage=1 00:08:48.134 --rc genhtml_function_coverage=1 00:08:48.134 --rc genhtml_legend=1 00:08:48.134 --rc geninfo_all_blocks=1 00:08:48.134 --rc geninfo_unexecuted_blocks=1 00:08:48.134 00:08:48.134 ' 00:08:48.134 09:36:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:48.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:48.134 --rc genhtml_branch_coverage=1 00:08:48.134 --rc genhtml_function_coverage=1 00:08:48.134 --rc genhtml_legend=1 00:08:48.134 --rc geninfo_all_blocks=1 00:08:48.134 --rc geninfo_unexecuted_blocks=1 00:08:48.134 00:08:48.134 ' 00:08:48.134 09:36:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:48.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:48.134 --rc genhtml_branch_coverage=1 00:08:48.134 --rc genhtml_function_coverage=1 00:08:48.134 --rc genhtml_legend=1 00:08:48.134 --rc geninfo_all_blocks=1 00:08:48.134 --rc geninfo_unexecuted_blocks=1 00:08:48.134 00:08:48.134 ' 00:08:48.134 09:36:35 spdk_dd.spdk_dd_negative -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:48.134 09:36:35 spdk_dd.spdk_dd_negative -- scripts/common.sh@15 -- # shopt -s extglob 00:08:48.134 09:36:35 spdk_dd.spdk_dd_negative -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:48.134 09:36:35 spdk_dd.spdk_dd_negative -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:48.134 09:36:35 spdk_dd.spdk_dd_negative -- scripts/common.sh@553 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:08:48.134 09:36:35 spdk_dd.spdk_dd_negative -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:48.134 09:36:35 spdk_dd.spdk_dd_negative -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:48.134 09:36:35 spdk_dd.spdk_dd_negative -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:48.134 09:36:35 spdk_dd.spdk_dd_negative -- paths/export.sh@5 -- # export PATH 00:08:48.134 09:36:35 spdk_dd.spdk_dd_negative -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:48.134 09:36:35 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@210 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:48.134 09:36:35 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@211 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:48.134 09:36:35 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@213 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:48.134 09:36:35 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@214 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:48.134 09:36:35 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@216 -- # run_test dd_invalid_arguments invalid_arguments 00:08:48.134 09:36:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:48.134 09:36:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:48.134 09:36:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:48.134 ************************************ 00:08:48.134 START TEST 
dd_invalid_arguments 00:08:48.134 ************************************ 00:08:48.134 09:36:35 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1129 -- # invalid_arguments 00:08:48.134 09:36:35 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:08:48.134 09:36:35 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@652 -- # local es=0 00:08:48.134 09:36:35 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:08:48.134 09:36:35 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:48.135 09:36:35 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:48.135 09:36:35 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:48.135 09:36:35 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:48.135 09:36:35 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:48.135 09:36:35 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:48.135 09:36:35 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:48.135 09:36:35 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:48.135 09:36:35 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:08:48.135 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:08:48.135 00:08:48.135 CPU options: 00:08:48.135 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:08:48.135 (like [0,1,10]) 00:08:48.135 --lcores lcore to CPU mapping list. The list is in the format: 00:08:48.135 [<,lcores[@CPUs]>...] 00:08:48.135 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:08:48.135 Within the group, '-' is used for range separator, 00:08:48.135 ',' is used for single number separator. 00:08:48.135 '( )' can be omitted for single element group, 00:08:48.135 '@' can be omitted if cpus and lcores have the same value 00:08:48.135 --disable-cpumask-locks Disable CPU core lock files. 00:08:48.135 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:08:48.135 pollers in the app support interrupt mode) 00:08:48.135 -p, --main-core main (primary) core for DPDK 00:08:48.135 00:08:48.135 Configuration options: 00:08:48.135 -c, --config, --json JSON config file 00:08:48.135 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:08:48.135 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 
00:08:48.135 --wait-for-rpc wait for RPCs to initialize subsystems 00:08:48.135 --rpcs-allowed comma-separated list of permitted RPCS 00:08:48.135 --json-ignore-init-errors don't exit on invalid config entry 00:08:48.135 00:08:48.135 Memory options: 00:08:48.135 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:08:48.135 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:08:48.135 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:08:48.135 -R, --huge-unlink unlink huge files after initialization 00:08:48.135 -n, --mem-channels number of memory channels used for DPDK 00:08:48.135 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:08:48.135 --msg-mempool-size global message memory pool size in count (default: 262143) 00:08:48.135 --no-huge run without using hugepages 00:08:48.135 --enforce-numa enforce NUMA allocations from the specified NUMA node 00:08:48.135 -i, --shm-id shared memory ID (optional) 00:08:48.135 -g, --single-file-segments force creating just one hugetlbfs file 00:08:48.135 00:08:48.135 PCI options: 00:08:48.135 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:08:48.135 -B, --pci-blocked pci addr to block (can be used more than once) 00:08:48.135 -u, --no-pci disable PCI access 00:08:48.135 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:08:48.135 00:08:48.135 Log options: 00:08:48.135 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, 00:08:48.135 app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, 00:08:48.135 bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid_sb, 00:08:48.135 blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, 00:08:48.135 blobfs_rw, fsdev, fsdev_aio, ftl_core, ftl_init, gpt_parse, idxd, ioat, 00:08:48.135 iscsi_init, json_util, keyring, log_rpc, lvol, lvol_rpc, notify_rpc, 00:08:48.135 nvme, nvme_auth, nvme_cuse, opal, reactor, rpc, rpc_client, sock, 00:08:48.135 sock_posix, spdk_aio_mgr_io, thread, trace, uring, vbdev_delay, 00:08:48.135 vbdev_gpt, vbdev_lvol, vbdev_opal, vbdev_passthru, vbdev_split, 00:08:48.135 vbdev_zone_block, vfio_pci, vfio_user, virtio, virtio_blk, virtio_dev, 00:08:48.135 virtio_pci, virtio_user, virtio_vfio_user, vmd) 00:08:48.135 --silence-noticelog disable notice level logging to stderr 00:08:48.135 00:08:48.135 Trace options: 00:08:48.135 --num-trace-entries number of trace entries for each core, must be power of 2, 00:08:48.135 setting 0 to disable trace (default 32768) 00:08:48.135 Tracepoints vary in size and can use more than one trace entry. 00:08:48.135 -e, --tpoint-group [:] 00:08:48.135 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:08:48.135 [2024-11-19 09:36:35.749391] spdk_dd.c:1480:main: *ERROR*: Invalid arguments 00:08:48.394 group_name - tracepoint group name for spdk trace buffers (bdev, ftl, 00:08:48.394 blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, sock, blob, 00:08:48.394 bdev_raid, scheduler, all). 00:08:48.394 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:08:48.394 a tracepoint group. First tpoint inside a group can be enabled by 00:08:48.394 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:08:48.394 combined (e.g. thread,bdev:0x1). 
All available tpoints can be found 00:08:48.394 in /include/spdk_internal/trace_defs.h 00:08:48.394 00:08:48.394 Other options: 00:08:48.394 -h, --help show this usage 00:08:48.394 -v, --version print SPDK version 00:08:48.394 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:08:48.394 --env-context Opaque context for use of the env implementation 00:08:48.394 00:08:48.394 Application specific: 00:08:48.394 [--------- DD Options ---------] 00:08:48.394 --if Input file. Must specify either --if or --ib. 00:08:48.394 --ib Input bdev. Must specifier either --if or --ib 00:08:48.394 --of Output file. Must specify either --of or --ob. 00:08:48.394 --ob Output bdev. Must specify either --of or --ob. 00:08:48.394 --iflag Input file flags. 00:08:48.394 --oflag Output file flags. 00:08:48.394 --bs I/O unit size (default: 4096) 00:08:48.394 --qd Queue depth (default: 2) 00:08:48.394 --count I/O unit count. The number of I/O units to copy. (default: all) 00:08:48.394 --skip Skip this many I/O units at start of input. (default: 0) 00:08:48.394 --seek Skip this many I/O units at start of output. (default: 0) 00:08:48.394 --aio Force usage of AIO. (by default io_uring is used if available) 00:08:48.394 --sparse Enable hole skipping in input target 00:08:48.394 Available iflag and oflag values: 00:08:48.394 append - append mode 00:08:48.394 direct - use direct I/O for data 00:08:48.394 directory - fail unless a directory 00:08:48.394 dsync - use synchronized I/O for data 00:08:48.394 noatime - do not update access time 00:08:48.394 noctty - do not assign controlling terminal from file 00:08:48.394 nofollow - do not follow symlinks 00:08:48.394 nonblock - use non-blocking I/O 00:08:48.394 sync - use synchronized I/O for data and metadata 00:08:48.394 09:36:35 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@655 -- # es=2 00:08:48.394 09:36:35 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:48.394 09:36:35 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:48.394 09:36:35 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:48.394 00:08:48.394 real 0m0.075s 00:08:48.394 user 0m0.047s 00:08:48.394 sys 0m0.027s 00:08:48.394 09:36:35 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:48.394 ************************************ 00:08:48.394 END TEST dd_invalid_arguments 00:08:48.394 ************************************ 00:08:48.394 09:36:35 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@10 -- # set +x 00:08:48.394 09:36:35 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@217 -- # run_test dd_double_input double_input 00:08:48.394 09:36:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:48.394 09:36:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:48.394 09:36:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:48.394 ************************************ 00:08:48.394 START TEST dd_double_input 00:08:48.394 ************************************ 00:08:48.394 09:36:35 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1129 -- # double_input 00:08:48.394 09:36:35 spdk_dd.spdk_dd_negative.dd_double_input -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:08:48.394 09:36:35 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@652 -- # local es=0 00:08:48.394 09:36:35 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:08:48.394 09:36:35 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:48.394 09:36:35 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:48.394 09:36:35 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:48.394 09:36:35 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:48.394 09:36:35 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:48.394 09:36:35 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:48.394 09:36:35 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:48.394 09:36:35 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:48.394 09:36:35 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:08:48.394 [2024-11-19 09:36:35.870800] spdk_dd.c:1487:main: *ERROR*: You may specify either --if or --ib, but not both. 
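Each negative case here has the same shape: spdk_dd is run with a deliberately invalid argument combination inside the NOT wrapper, which passes only when the command exits non-zero, and the *ERROR* line above is what confirms the intended rejection path was hit. A hand-run sketch of this double-input case, assuming a built tree with ./build/bin/spdk_dd and execution from the repository root:

# giving both --if and --ib must make spdk_dd exit non-zero with:
# "You may specify either --if or --ib, but not both."
touch test/dd/dd.dump0
./build/bin/spdk_dd --if=test/dd/dd.dump0 --ib= --ob= \
  && echo "unexpected success" >&2 \
  || echo "rejected as expected (exit $?)"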
00:08:48.394 09:36:35 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@655 -- # es=22 00:08:48.394 09:36:35 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:48.394 09:36:35 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:48.394 09:36:35 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:48.394 00:08:48.394 real 0m0.069s 00:08:48.394 user 0m0.039s 00:08:48.394 sys 0m0.029s 00:08:48.394 ************************************ 00:08:48.394 END TEST dd_double_input 00:08:48.394 ************************************ 00:08:48.394 09:36:35 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:48.394 09:36:35 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@10 -- # set +x 00:08:48.394 09:36:35 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@218 -- # run_test dd_double_output double_output 00:08:48.394 09:36:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:48.394 09:36:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:48.394 09:36:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:48.394 ************************************ 00:08:48.394 START TEST dd_double_output 00:08:48.394 ************************************ 00:08:48.394 09:36:35 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1129 -- # double_output 00:08:48.394 09:36:35 spdk_dd.spdk_dd_negative.dd_double_output -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:08:48.394 09:36:35 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@652 -- # local es=0 00:08:48.394 09:36:35 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:08:48.394 09:36:35 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:48.394 09:36:35 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:48.394 09:36:35 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:48.394 09:36:35 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:48.394 09:36:35 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:48.394 09:36:35 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:48.395 09:36:35 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:48.395 09:36:35 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:48.395 09:36:35 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:08:48.395 [2024-11-19 09:36:35.996392] spdk_dd.c:1493:main: *ERROR*: You may specify either --of or --ob, but not both. 00:08:48.395 09:36:36 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@655 -- # es=22 00:08:48.395 09:36:36 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:48.395 09:36:36 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:48.395 09:36:36 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:48.395 00:08:48.395 real 0m0.071s 00:08:48.395 user 0m0.042s 00:08:48.395 sys 0m0.029s 00:08:48.395 09:36:36 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:48.395 09:36:36 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@10 -- # set +x 00:08:48.395 ************************************ 00:08:48.395 END TEST dd_double_output 00:08:48.395 ************************************ 00:08:48.654 09:36:36 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@219 -- # run_test dd_no_input no_input 00:08:48.654 09:36:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:48.654 09:36:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:48.654 09:36:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:48.654 ************************************ 00:08:48.654 START TEST dd_no_input 00:08:48.654 ************************************ 00:08:48.654 09:36:36 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1129 -- # no_input 00:08:48.654 09:36:36 spdk_dd.spdk_dd_negative.dd_no_input -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:08:48.654 09:36:36 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@652 -- # local es=0 00:08:48.654 09:36:36 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:08:48.654 09:36:36 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:48.654 09:36:36 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:48.654 09:36:36 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:48.654 09:36:36 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:48.654 09:36:36 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:48.654 09:36:36 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:48.654 09:36:36 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:48.654 09:36:36 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:48.654 09:36:36 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:08:48.654 [2024-11-19 09:36:36.121071] spdk_dd.c:1499:main: 
*ERROR*: You must specify either --if or --ib 00:08:48.654 09:36:36 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@655 -- # es=22 00:08:48.654 09:36:36 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:48.654 09:36:36 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:48.654 09:36:36 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:48.654 00:08:48.654 real 0m0.069s 00:08:48.654 user 0m0.042s 00:08:48.654 sys 0m0.027s 00:08:48.654 ************************************ 00:08:48.654 END TEST dd_no_input 00:08:48.654 ************************************ 00:08:48.654 09:36:36 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:48.654 09:36:36 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@10 -- # set +x 00:08:48.654 09:36:36 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@220 -- # run_test dd_no_output no_output 00:08:48.654 09:36:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:48.654 09:36:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:48.654 09:36:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:48.654 ************************************ 00:08:48.654 START TEST dd_no_output 00:08:48.654 ************************************ 00:08:48.654 09:36:36 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1129 -- # no_output 00:08:48.654 09:36:36 spdk_dd.spdk_dd_negative.dd_no_output -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:48.654 09:36:36 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@652 -- # local es=0 00:08:48.654 09:36:36 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:48.654 09:36:36 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:48.654 09:36:36 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:48.654 09:36:36 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:48.654 09:36:36 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:48.654 09:36:36 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:48.654 09:36:36 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:48.655 09:36:36 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:48.655 09:36:36 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:48.655 09:36:36 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:48.655 [2024-11-19 09:36:36.247133] spdk_dd.c:1505:main: *ERROR*: You must specify either --of or --ob 00:08:48.655 09:36:36 
spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@655 -- # es=22 00:08:48.655 09:36:36 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:48.655 09:36:36 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:48.655 09:36:36 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:48.655 00:08:48.655 real 0m0.077s 00:08:48.655 user 0m0.050s 00:08:48.655 sys 0m0.026s 00:08:48.655 09:36:36 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:48.655 09:36:36 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@10 -- # set +x 00:08:48.655 ************************************ 00:08:48.655 END TEST dd_no_output 00:08:48.655 ************************************ 00:08:48.914 09:36:36 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@221 -- # run_test dd_wrong_blocksize wrong_blocksize 00:08:48.914 09:36:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:48.914 09:36:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:48.914 09:36:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:48.914 ************************************ 00:08:48.914 START TEST dd_wrong_blocksize 00:08:48.914 ************************************ 00:08:48.914 09:36:36 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1129 -- # wrong_blocksize 00:08:48.914 09:36:36 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:08:48.914 09:36:36 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@652 -- # local es=0 00:08:48.914 09:36:36 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:08:48.914 09:36:36 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:48.914 09:36:36 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:48.914 09:36:36 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:48.914 09:36:36 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:48.914 09:36:36 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:48.914 09:36:36 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:48.914 09:36:36 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:48.914 09:36:36 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:48.914 09:36:36 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:08:48.914 [2024-11-19 09:36:36.375871] spdk_dd.c:1511:main: *ERROR*: Invalid --bs value 00:08:48.914 09:36:36 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@655 -- # es=22 00:08:48.914 09:36:36 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:48.914 09:36:36 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:48.914 09:36:36 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:48.914 00:08:48.914 real 0m0.078s 00:08:48.914 user 0m0.051s 00:08:48.914 sys 0m0.027s 00:08:48.914 09:36:36 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:48.914 ************************************ 00:08:48.914 END TEST dd_wrong_blocksize 00:08:48.914 ************************************ 00:08:48.914 09:36:36 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@10 -- # set +x 00:08:48.914 09:36:36 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@222 -- # run_test dd_smaller_blocksize smaller_blocksize 00:08:48.914 09:36:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:48.914 09:36:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:48.914 09:36:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:48.914 ************************************ 00:08:48.914 START TEST dd_smaller_blocksize 00:08:48.914 ************************************ 00:08:48.914 09:36:36 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1129 -- # smaller_blocksize 00:08:48.914 09:36:36 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:08:48.914 09:36:36 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@652 -- # local es=0 00:08:48.914 09:36:36 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:08:48.914 09:36:36 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:48.914 09:36:36 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:48.914 09:36:36 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:48.914 09:36:36 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:48.914 09:36:36 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:48.914 09:36:36 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:48.914 09:36:36 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:48.914 
09:36:36 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:48.914 09:36:36 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:08:48.914 [2024-11-19 09:36:36.507804] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:08:48.914 [2024-11-19 09:36:36.507908] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61793 ] 00:08:49.174 [2024-11-19 09:36:36.659331] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:49.174 [2024-11-19 09:36:36.730919] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:49.174 [2024-11-19 09:36:36.788605] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:49.743 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:08:50.060 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:08:50.060 [2024-11-19 09:36:37.398648] spdk_dd.c:1184:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:08:50.060 [2024-11-19 09:36:37.398749] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:50.060 [2024-11-19 09:36:37.520882] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:50.060 09:36:37 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@655 -- # es=244 00:08:50.060 09:36:37 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:50.060 09:36:37 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@664 -- # es=116 00:08:50.060 09:36:37 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@665 -- # case "$es" in 00:08:50.060 09:36:37 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@672 -- # es=1 00:08:50.060 09:36:37 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:50.060 00:08:50.060 real 0m1.136s 00:08:50.060 user 0m0.413s 00:08:50.060 sys 0m0.615s 00:08:50.060 09:36:37 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:50.060 09:36:37 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@10 -- # set +x 00:08:50.060 ************************************ 00:08:50.060 END TEST dd_smaller_blocksize 00:08:50.060 ************************************ 00:08:50.060 09:36:37 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@223 -- # run_test dd_invalid_count invalid_count 00:08:50.060 09:36:37 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:50.060 09:36:37 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:50.060 09:36:37 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:50.060 ************************************ 00:08:50.060 START TEST dd_invalid_count 00:08:50.060 ************************************ 00:08:50.060 09:36:37 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1129 -- # invalid_count 
00:08:50.060 09:36:37 spdk_dd.spdk_dd_negative.dd_invalid_count -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:08:50.060 09:36:37 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@652 -- # local es=0 00:08:50.060 09:36:37 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:08:50.060 09:36:37 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:50.060 09:36:37 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:50.060 09:36:37 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:50.060 09:36:37 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:50.060 09:36:37 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:50.060 09:36:37 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:50.060 09:36:37 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:50.060 09:36:37 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:50.060 09:36:37 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:08:50.333 [2024-11-19 09:36:37.688276] spdk_dd.c:1517:main: *ERROR*: Invalid --count value 00:08:50.333 09:36:37 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@655 -- # es=22 00:08:50.333 09:36:37 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:50.333 09:36:37 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:50.333 09:36:37 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:50.333 00:08:50.333 real 0m0.073s 00:08:50.333 user 0m0.051s 00:08:50.333 sys 0m0.021s 00:08:50.333 09:36:37 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:50.333 09:36:37 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@10 -- # set +x 00:08:50.333 ************************************ 00:08:50.333 END TEST dd_invalid_count 00:08:50.333 ************************************ 00:08:50.333 09:36:37 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@224 -- # run_test dd_invalid_oflag invalid_oflag 00:08:50.333 09:36:37 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:50.333 09:36:37 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:50.333 09:36:37 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:50.333 ************************************ 
00:08:50.333 START TEST dd_invalid_oflag 00:08:50.333 ************************************ 00:08:50.333 09:36:37 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1129 -- # invalid_oflag 00:08:50.333 09:36:37 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:08:50.333 09:36:37 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@652 -- # local es=0 00:08:50.333 09:36:37 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:08:50.333 09:36:37 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:50.333 09:36:37 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:50.333 09:36:37 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:50.333 09:36:37 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:50.333 09:36:37 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:50.333 09:36:37 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:50.333 09:36:37 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:50.333 09:36:37 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:50.333 09:36:37 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:08:50.334 [2024-11-19 09:36:37.810807] spdk_dd.c:1523:main: *ERROR*: --oflags may be used only with --of 00:08:50.334 09:36:37 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@655 -- # es=22 00:08:50.334 09:36:37 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:50.334 09:36:37 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:50.334 09:36:37 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:50.334 00:08:50.334 real 0m0.073s 00:08:50.334 user 0m0.044s 00:08:50.334 sys 0m0.028s 00:08:50.334 09:36:37 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:50.334 09:36:37 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@10 -- # set +x 00:08:50.334 ************************************ 00:08:50.334 END TEST dd_invalid_oflag 00:08:50.334 ************************************ 00:08:50.334 09:36:37 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@225 -- # run_test dd_invalid_iflag invalid_iflag 00:08:50.334 09:36:37 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:50.334 09:36:37 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:50.334 09:36:37 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:50.334 ************************************ 00:08:50.334 START TEST dd_invalid_iflag 00:08:50.334 
************************************ 00:08:50.334 09:36:37 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1129 -- # invalid_iflag 00:08:50.334 09:36:37 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:08:50.334 09:36:37 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@652 -- # local es=0 00:08:50.334 09:36:37 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:08:50.334 09:36:37 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:50.334 09:36:37 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:50.334 09:36:37 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:50.334 09:36:37 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:50.334 09:36:37 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:50.334 09:36:37 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:50.334 09:36:37 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:50.334 09:36:37 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:50.334 09:36:37 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:08:50.334 [2024-11-19 09:36:37.935383] spdk_dd.c:1529:main: *ERROR*: --iflags may be used only with --if 00:08:50.334 09:36:37 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@655 -- # es=22 00:08:50.334 09:36:37 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:50.334 09:36:37 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:50.334 09:36:37 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:50.334 00:08:50.334 real 0m0.070s 00:08:50.334 user 0m0.040s 00:08:50.334 sys 0m0.030s 00:08:50.334 09:36:37 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:50.334 09:36:37 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@10 -- # set +x 00:08:50.334 ************************************ 00:08:50.334 END TEST dd_invalid_iflag 00:08:50.334 ************************************ 00:08:50.592 09:36:37 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@226 -- # run_test dd_unknown_flag unknown_flag 00:08:50.592 09:36:37 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:50.592 09:36:37 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:50.592 09:36:37 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:50.592 ************************************ 00:08:50.592 START TEST dd_unknown_flag 00:08:50.592 ************************************ 00:08:50.592 
09:36:38 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1129 -- # unknown_flag 00:08:50.592 09:36:38 spdk_dd.spdk_dd_negative.dd_unknown_flag -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:08:50.592 09:36:38 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@652 -- # local es=0 00:08:50.592 09:36:38 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:08:50.592 09:36:38 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:50.592 09:36:38 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:50.592 09:36:38 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:50.592 09:36:38 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:50.592 09:36:38 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:50.592 09:36:38 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:50.592 09:36:38 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:50.592 09:36:38 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:50.592 09:36:38 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:08:50.592 [2024-11-19 09:36:38.065217] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 
00:08:50.592 [2024-11-19 09:36:38.065343] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61896 ] 00:08:50.592 [2024-11-19 09:36:38.207454] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:50.852 [2024-11-19 09:36:38.270782] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:50.852 [2024-11-19 09:36:38.324943] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:50.852 [2024-11-19 09:36:38.361221] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:08:50.852 [2024-11-19 09:36:38.361301] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:50.852 [2024-11-19 09:36:38.361357] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:08:50.852 [2024-11-19 09:36:38.361370] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:50.852 [2024-11-19 09:36:38.361615] spdk_dd.c:1218:dd_run: *ERROR*: Failed to register files with io_uring: -9 (Bad file descriptor) 00:08:50.852 [2024-11-19 09:36:38.361632] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:50.852 [2024-11-19 09:36:38.361675] app.c:1049:app_stop: *NOTICE*: spdk_app_stop called twice 00:08:50.852 [2024-11-19 09:36:38.361685] app.c:1049:app_stop: *NOTICE*: spdk_app_stop called twice 00:08:51.112 [2024-11-19 09:36:38.477593] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:51.112 09:36:38 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@655 -- # es=234 00:08:51.112 09:36:38 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:51.112 09:36:38 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@664 -- # es=106 00:08:51.112 09:36:38 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@665 -- # case "$es" in 00:08:51.112 09:36:38 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@672 -- # es=1 00:08:51.112 09:36:38 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:51.112 00:08:51.112 real 0m0.538s 00:08:51.112 user 0m0.298s 00:08:51.112 sys 0m0.146s 00:08:51.112 09:36:38 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:51.112 09:36:38 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@10 -- # set +x 00:08:51.112 ************************************ 00:08:51.112 END TEST dd_unknown_flag 00:08:51.112 ************************************ 00:08:51.112 09:36:38 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@227 -- # run_test dd_invalid_json invalid_json 00:08:51.112 09:36:38 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:51.112 09:36:38 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:51.112 09:36:38 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:51.112 ************************************ 00:08:51.112 START TEST dd_invalid_json 00:08:51.112 ************************************ 00:08:51.112 09:36:38 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1129 -- # invalid_json 00:08:51.112 09:36:38 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:08:51.112 09:36:38 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@652 -- # local es=0 00:08:51.112 09:36:38 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@94 -- # : 00:08:51.112 09:36:38 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:08:51.112 09:36:38 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:51.112 09:36:38 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:51.112 09:36:38 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:51.112 09:36:38 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:51.112 09:36:38 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:51.112 09:36:38 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:51.112 09:36:38 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:51.112 09:36:38 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:51.112 09:36:38 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:08:51.112 [2024-11-19 09:36:38.655585] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 
00:08:51.112 [2024-11-19 09:36:38.655680] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61919 ] 00:08:51.372 [2024-11-19 09:36:38.802994] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:51.372 [2024-11-19 09:36:38.864326] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:51.372 [2024-11-19 09:36:38.864409] json_config.c: 535:parse_json: *ERROR*: JSON data cannot be empty 00:08:51.372 [2024-11-19 09:36:38.864426] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:08:51.372 [2024-11-19 09:36:38.864462] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:51.372 [2024-11-19 09:36:38.864502] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:51.372 09:36:38 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@655 -- # es=234 00:08:51.372 09:36:38 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:51.372 09:36:38 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@664 -- # es=106 00:08:51.372 09:36:38 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@665 -- # case "$es" in 00:08:51.372 09:36:38 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@672 -- # es=1 00:08:51.372 09:36:38 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:51.372 00:08:51.372 real 0m0.336s 00:08:51.372 user 0m0.173s 00:08:51.372 sys 0m0.062s 00:08:51.372 09:36:38 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:51.372 09:36:38 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@10 -- # set +x 00:08:51.372 ************************************ 00:08:51.372 END TEST dd_invalid_json 00:08:51.372 ************************************ 00:08:51.372 09:36:38 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@228 -- # run_test dd_invalid_seek invalid_seek 00:08:51.372 09:36:38 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:51.372 09:36:38 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:51.372 09:36:38 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:51.372 ************************************ 00:08:51.372 START TEST dd_invalid_seek 00:08:51.372 ************************************ 00:08:51.372 09:36:38 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@1129 -- # invalid_seek 00:08:51.372 09:36:38 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@102 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:08:51.372 09:36:38 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@103 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:08:51.372 09:36:38 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@103 -- # local -A method_bdev_malloc_create_0 00:08:51.372 09:36:38 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@108 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:08:51.372 09:36:38 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@109 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:08:51.372 
09:36:38 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@109 -- # local -A method_bdev_malloc_create_1 00:08:51.372 09:36:38 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@115 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:08:51.372 09:36:38 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@652 -- # local es=0 00:08:51.372 09:36:38 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:08:51.372 09:36:38 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:51.372 09:36:38 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@115 -- # gen_conf 00:08:51.372 09:36:38 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/common.sh@31 -- # xtrace_disable 00:08:51.372 09:36:38 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@10 -- # set +x 00:08:51.372 09:36:38 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:51.372 09:36:38 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:51.372 09:36:38 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:51.372 09:36:38 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:51.372 09:36:38 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:51.372 09:36:38 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:51.372 09:36:38 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:51.372 09:36:38 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:08:51.631 { 00:08:51.631 "subsystems": [ 00:08:51.631 { 00:08:51.631 "subsystem": "bdev", 00:08:51.631 "config": [ 00:08:51.631 { 00:08:51.631 "params": { 00:08:51.631 "block_size": 512, 00:08:51.631 "num_blocks": 512, 00:08:51.631 "name": "malloc0" 00:08:51.631 }, 00:08:51.631 "method": "bdev_malloc_create" 00:08:51.631 }, 00:08:51.631 { 00:08:51.631 "params": { 00:08:51.631 "block_size": 512, 00:08:51.631 "num_blocks": 512, 00:08:51.631 "name": "malloc1" 00:08:51.631 }, 00:08:51.631 "method": "bdev_malloc_create" 00:08:51.631 }, 00:08:51.631 { 00:08:51.631 "method": "bdev_wait_for_examine" 00:08:51.631 } 00:08:51.631 ] 00:08:51.631 } 00:08:51.631 ] 00:08:51.631 } 00:08:51.631 [2024-11-19 09:36:39.047798] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 
00:08:51.631 [2024-11-19 09:36:39.047887] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61954 ] 00:08:51.631 [2024-11-19 09:36:39.196177] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:51.631 [2024-11-19 09:36:39.253139] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:51.890 [2024-11-19 09:36:39.307999] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:51.890 [2024-11-19 09:36:39.369357] spdk_dd.c:1145:dd_run: *ERROR*: --seek value too big (513) - only 512 blocks available in output 00:08:51.890 [2024-11-19 09:36:39.369421] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:51.890 [2024-11-19 09:36:39.484492] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:52.148 09:36:39 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@655 -- # es=228 00:08:52.148 09:36:39 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:52.148 09:36:39 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@664 -- # es=100 00:08:52.148 09:36:39 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@665 -- # case "$es" in 00:08:52.148 09:36:39 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@672 -- # es=1 00:08:52.148 09:36:39 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:52.148 00:08:52.148 real 0m0.564s 00:08:52.148 user 0m0.364s 00:08:52.148 sys 0m0.155s 00:08:52.148 ************************************ 00:08:52.148 END TEST dd_invalid_seek 00:08:52.148 ************************************ 00:08:52.148 09:36:39 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:52.148 09:36:39 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@10 -- # set +x 00:08:52.149 09:36:39 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@229 -- # run_test dd_invalid_skip invalid_skip 00:08:52.149 09:36:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:52.149 09:36:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:52.149 09:36:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:52.149 ************************************ 00:08:52.149 START TEST dd_invalid_skip 00:08:52.149 ************************************ 00:08:52.149 09:36:39 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@1129 -- # invalid_skip 00:08:52.149 09:36:39 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@125 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:08:52.149 09:36:39 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@126 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:08:52.149 09:36:39 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@126 -- # local -A method_bdev_malloc_create_0 00:08:52.149 09:36:39 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@131 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:08:52.149 09:36:39 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@132 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' 
['block_size']='512') 00:08:52.149 09:36:39 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@132 -- # local -A method_bdev_malloc_create_1 00:08:52.149 09:36:39 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@138 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:08:52.149 09:36:39 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@652 -- # local es=0 00:08:52.149 09:36:39 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:08:52.149 09:36:39 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@138 -- # gen_conf 00:08:52.149 09:36:39 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:52.149 09:36:39 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/common.sh@31 -- # xtrace_disable 00:08:52.149 09:36:39 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@10 -- # set +x 00:08:52.149 09:36:39 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:52.149 09:36:39 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:52.149 09:36:39 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:52.149 09:36:39 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:52.149 09:36:39 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:52.149 09:36:39 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:52.149 09:36:39 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:52.149 09:36:39 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:08:52.149 { 00:08:52.149 "subsystems": [ 00:08:52.149 { 00:08:52.149 "subsystem": "bdev", 00:08:52.149 "config": [ 00:08:52.149 { 00:08:52.149 "params": { 00:08:52.149 "block_size": 512, 00:08:52.149 "num_blocks": 512, 00:08:52.149 "name": "malloc0" 00:08:52.149 }, 00:08:52.149 "method": "bdev_malloc_create" 00:08:52.149 }, 00:08:52.149 { 00:08:52.149 "params": { 00:08:52.149 "block_size": 512, 00:08:52.149 "num_blocks": 512, 00:08:52.149 "name": "malloc1" 00:08:52.149 }, 00:08:52.149 "method": "bdev_malloc_create" 00:08:52.149 }, 00:08:52.149 { 00:08:52.149 "method": "bdev_wait_for_examine" 00:08:52.149 } 00:08:52.149 ] 00:08:52.149 } 00:08:52.149 ] 00:08:52.149 } 00:08:52.149 [2024-11-19 09:36:39.656845] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 
00:08:52.149 [2024-11-19 09:36:39.656939] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61982 ] 00:08:52.408 [2024-11-19 09:36:39.804640] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:52.408 [2024-11-19 09:36:39.867887] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:52.408 [2024-11-19 09:36:39.923866] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:52.408 [2024-11-19 09:36:39.986017] spdk_dd.c:1102:dd_run: *ERROR*: --skip value too big (513) - only 512 blocks available in input 00:08:52.408 [2024-11-19 09:36:39.986091] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:52.666 [2024-11-19 09:36:40.110996] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:52.666 09:36:40 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@655 -- # es=228 00:08:52.666 09:36:40 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:52.666 09:36:40 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@664 -- # es=100 00:08:52.666 09:36:40 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@665 -- # case "$es" in 00:08:52.666 09:36:40 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@672 -- # es=1 00:08:52.666 09:36:40 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:52.666 00:08:52.666 real 0m0.580s 00:08:52.666 user 0m0.380s 00:08:52.666 sys 0m0.155s 00:08:52.666 09:36:40 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:52.666 09:36:40 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@10 -- # set +x 00:08:52.666 ************************************ 00:08:52.666 END TEST dd_invalid_skip 00:08:52.666 ************************************ 00:08:52.666 09:36:40 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@230 -- # run_test dd_invalid_input_count invalid_input_count 00:08:52.666 09:36:40 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:52.666 09:36:40 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:52.666 09:36:40 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:52.666 ************************************ 00:08:52.666 START TEST dd_invalid_input_count 00:08:52.666 ************************************ 00:08:52.666 09:36:40 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@1129 -- # invalid_input_count 00:08:52.666 09:36:40 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@149 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:08:52.666 09:36:40 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@150 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:08:52.666 09:36:40 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@150 -- # local -A method_bdev_malloc_create_0 00:08:52.666 09:36:40 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@155 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:08:52.666 09:36:40 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@156 -- # 
method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:08:52.666 09:36:40 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@156 -- # local -A method_bdev_malloc_create_1 00:08:52.666 09:36:40 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@162 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:08:52.666 09:36:40 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@652 -- # local es=0 00:08:52.666 09:36:40 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@162 -- # gen_conf 00:08:52.666 09:36:40 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:08:52.666 09:36:40 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:52.666 09:36:40 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/common.sh@31 -- # xtrace_disable 00:08:52.666 09:36:40 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@10 -- # set +x 00:08:52.666 09:36:40 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:52.666 09:36:40 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:52.666 09:36:40 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:52.666 09:36:40 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:52.666 09:36:40 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:52.666 09:36:40 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:52.666 09:36:40 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:52.666 09:36:40 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:08:52.666 [2024-11-19 09:36:40.287837] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 
00:08:52.666 [2024-11-19 09:36:40.287937] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62021 ] 00:08:52.666 { 00:08:52.666 "subsystems": [ 00:08:52.666 { 00:08:52.666 "subsystem": "bdev", 00:08:52.666 "config": [ 00:08:52.666 { 00:08:52.666 "params": { 00:08:52.666 "block_size": 512, 00:08:52.666 "num_blocks": 512, 00:08:52.666 "name": "malloc0" 00:08:52.666 }, 00:08:52.666 "method": "bdev_malloc_create" 00:08:52.666 }, 00:08:52.666 { 00:08:52.666 "params": { 00:08:52.666 "block_size": 512, 00:08:52.667 "num_blocks": 512, 00:08:52.667 "name": "malloc1" 00:08:52.667 }, 00:08:52.667 "method": "bdev_malloc_create" 00:08:52.667 }, 00:08:52.667 { 00:08:52.667 "method": "bdev_wait_for_examine" 00:08:52.667 } 00:08:52.667 ] 00:08:52.667 } 00:08:52.667 ] 00:08:52.667 } 00:08:52.926 [2024-11-19 09:36:40.441239] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:52.926 [2024-11-19 09:36:40.510748] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:53.185 [2024-11-19 09:36:40.569725] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:53.185 [2024-11-19 09:36:40.636963] spdk_dd.c:1110:dd_run: *ERROR*: --count value too big (513) - only 512 blocks available from input 00:08:53.185 [2024-11-19 09:36:40.637047] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:53.185 [2024-11-19 09:36:40.761182] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:53.445 09:36:40 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@655 -- # es=228 00:08:53.445 09:36:40 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:53.445 09:36:40 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@664 -- # es=100 00:08:53.445 09:36:40 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@665 -- # case "$es" in 00:08:53.445 09:36:40 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@672 -- # es=1 00:08:53.445 09:36:40 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:53.445 00:08:53.445 real 0m0.602s 00:08:53.445 user 0m0.397s 00:08:53.445 sys 0m0.162s 00:08:53.445 09:36:40 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:53.445 ************************************ 00:08:53.445 END TEST dd_invalid_input_count 00:08:53.445 ************************************ 00:08:53.445 09:36:40 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@10 -- # set +x 00:08:53.445 09:36:40 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@231 -- # run_test dd_invalid_output_count invalid_output_count 00:08:53.445 09:36:40 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:53.445 09:36:40 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:53.445 09:36:40 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:53.445 ************************************ 00:08:53.445 START TEST dd_invalid_output_count 00:08:53.445 ************************************ 00:08:53.445 09:36:40 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@1129 -- # 
invalid_output_count 00:08:53.445 09:36:40 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@173 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:08:53.445 09:36:40 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@174 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:08:53.445 09:36:40 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@174 -- # local -A method_bdev_malloc_create_0 00:08:53.445 09:36:40 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@180 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:08:53.445 09:36:40 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@652 -- # local es=0 00:08:53.445 09:36:40 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@180 -- # gen_conf 00:08:53.445 09:36:40 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:08:53.445 09:36:40 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:53.445 09:36:40 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/common.sh@31 -- # xtrace_disable 00:08:53.445 09:36:40 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@10 -- # set +x 00:08:53.445 09:36:40 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:53.445 09:36:40 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:53.445 09:36:40 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:53.445 09:36:40 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:53.445 09:36:40 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:53.445 09:36:40 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:53.445 09:36:40 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:53.445 09:36:40 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:08:53.445 { 00:08:53.445 "subsystems": [ 00:08:53.445 { 00:08:53.445 "subsystem": "bdev", 00:08:53.445 "config": [ 00:08:53.445 { 00:08:53.445 "params": { 00:08:53.445 "block_size": 512, 00:08:53.445 "num_blocks": 512, 00:08:53.445 "name": "malloc0" 00:08:53.445 }, 00:08:53.445 "method": "bdev_malloc_create" 00:08:53.445 }, 00:08:53.445 { 00:08:53.445 "method": "bdev_wait_for_examine" 00:08:53.445 } 00:08:53.445 ] 00:08:53.445 } 00:08:53.445 ] 00:08:53.445 } 00:08:53.445 [2024-11-19 09:36:40.938252] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 
initialization... 00:08:53.445 [2024-11-19 09:36:40.938352] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62059 ] 00:08:53.704 [2024-11-19 09:36:41.087996] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:53.704 [2024-11-19 09:36:41.148613] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:53.704 [2024-11-19 09:36:41.205001] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:53.704 [2024-11-19 09:36:41.258817] spdk_dd.c:1152:dd_run: *ERROR*: --count value too big (513) - only 512 blocks available in output 00:08:53.704 [2024-11-19 09:36:41.258893] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:53.963 [2024-11-19 09:36:41.376817] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:53.963 09:36:41 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@655 -- # es=228 00:08:53.963 09:36:41 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:53.963 09:36:41 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@664 -- # es=100 00:08:53.963 09:36:41 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@665 -- # case "$es" in 00:08:53.963 09:36:41 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@672 -- # es=1 00:08:53.963 09:36:41 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:53.963 00:08:53.963 real 0m0.562s 00:08:53.963 user 0m0.366s 00:08:53.963 sys 0m0.154s 00:08:53.963 09:36:41 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:53.963 ************************************ 00:08:53.963 END TEST dd_invalid_output_count 00:08:53.963 ************************************ 00:08:53.963 09:36:41 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@10 -- # set +x 00:08:53.963 09:36:41 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@232 -- # run_test dd_bs_not_multiple bs_not_multiple 00:08:53.963 09:36:41 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:53.963 09:36:41 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:53.963 09:36:41 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:53.963 ************************************ 00:08:53.963 START TEST dd_bs_not_multiple 00:08:53.963 ************************************ 00:08:53.963 09:36:41 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@1129 -- # bs_not_multiple 00:08:53.963 09:36:41 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@190 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:08:53.963 09:36:41 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@191 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:08:53.963 09:36:41 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@191 -- # local -A method_bdev_malloc_create_0 00:08:53.963 09:36:41 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@196 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:08:53.963 09:36:41 
spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@197 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:08:53.963 09:36:41 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@197 -- # local -A method_bdev_malloc_create_1 00:08:53.963 09:36:41 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@203 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:08:53.963 09:36:41 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@203 -- # gen_conf 00:08:53.963 09:36:41 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@652 -- # local es=0 00:08:53.963 09:36:41 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:08:53.963 09:36:41 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/common.sh@31 -- # xtrace_disable 00:08:53.963 09:36:41 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:53.963 09:36:41 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@10 -- # set +x 00:08:53.963 09:36:41 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:53.963 09:36:41 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:53.963 09:36:41 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:53.963 09:36:41 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:53.963 09:36:41 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:53.963 09:36:41 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:53.963 09:36:41 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:53.963 09:36:41 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:08:53.963 [2024-11-19 09:36:41.558763] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 
00:08:53.963 [2024-11-19 09:36:41.558886] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62086 ] 00:08:53.963 { 00:08:53.963 "subsystems": [ 00:08:53.963 { 00:08:53.963 "subsystem": "bdev", 00:08:53.963 "config": [ 00:08:53.963 { 00:08:53.963 "params": { 00:08:53.963 "block_size": 512, 00:08:53.963 "num_blocks": 512, 00:08:53.963 "name": "malloc0" 00:08:53.963 }, 00:08:53.963 "method": "bdev_malloc_create" 00:08:53.963 }, 00:08:53.963 { 00:08:53.963 "params": { 00:08:53.963 "block_size": 512, 00:08:53.963 "num_blocks": 512, 00:08:53.963 "name": "malloc1" 00:08:53.963 }, 00:08:53.963 "method": "bdev_malloc_create" 00:08:53.963 }, 00:08:53.963 { 00:08:53.963 "method": "bdev_wait_for_examine" 00:08:53.963 } 00:08:53.963 ] 00:08:53.963 } 00:08:53.963 ] 00:08:53.963 } 00:08:54.221 [2024-11-19 09:36:41.708726] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:54.222 [2024-11-19 09:36:41.770344] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:54.222 [2024-11-19 09:36:41.826186] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:54.480 [2024-11-19 09:36:41.890000] spdk_dd.c:1168:dd_run: *ERROR*: --bs value must be a multiple of input native block size (512) 00:08:54.480 [2024-11-19 09:36:41.890072] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:54.480 [2024-11-19 09:36:42.013649] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:54.480 09:36:42 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@655 -- # es=234 00:08:54.480 09:36:42 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:54.480 09:36:42 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@664 -- # es=106 00:08:54.480 09:36:42 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@665 -- # case "$es" in 00:08:54.480 09:36:42 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@672 -- # es=1 00:08:54.480 09:36:42 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:54.480 00:08:54.480 real 0m0.588s 00:08:54.480 user 0m0.386s 00:08:54.480 sys 0m0.165s 00:08:54.480 09:36:42 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:54.480 ************************************ 00:08:54.480 END TEST dd_bs_not_multiple 00:08:54.480 ************************************ 00:08:54.480 09:36:42 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@10 -- # set +x 00:08:54.737 00:08:54.737 real 0m6.630s 00:08:54.737 user 0m3.574s 00:08:54.737 sys 0m2.477s 00:08:54.737 09:36:42 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:54.737 ************************************ 00:08:54.737 END TEST spdk_dd_negative 00:08:54.737 ************************************ 00:08:54.737 09:36:42 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:54.737 00:08:54.737 real 1m19.252s 00:08:54.737 user 0m50.784s 00:08:54.737 sys 0m34.842s 00:08:54.737 09:36:42 spdk_dd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:54.737 09:36:42 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:54.738 
************************************ 00:08:54.738 END TEST spdk_dd 00:08:54.738 ************************************ 00:08:54.738 09:36:42 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:08:54.738 09:36:42 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:08:54.738 09:36:42 -- spdk/autotest.sh@260 -- # timing_exit lib 00:08:54.738 09:36:42 -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:54.738 09:36:42 -- common/autotest_common.sh@10 -- # set +x 00:08:54.738 09:36:42 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:08:54.738 09:36:42 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:08:54.738 09:36:42 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:08:54.738 09:36:42 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:08:54.738 09:36:42 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:08:54.738 09:36:42 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:08:54.738 09:36:42 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:08:54.738 09:36:42 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:54.738 09:36:42 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:54.738 09:36:42 -- common/autotest_common.sh@10 -- # set +x 00:08:54.738 ************************************ 00:08:54.738 START TEST nvmf_tcp 00:08:54.738 ************************************ 00:08:54.738 09:36:42 nvmf_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:08:54.738 * Looking for test storage... 00:08:54.738 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:08:54.738 09:36:42 nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:54.738 09:36:42 nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:08:54.738 09:36:42 nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:54.996 09:36:42 nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:54.997 09:36:42 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:54.997 09:36:42 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:54.997 09:36:42 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:54.997 09:36:42 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:08:54.997 09:36:42 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:08:54.997 09:36:42 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:08:54.997 09:36:42 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:08:54.997 09:36:42 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:08:54.997 09:36:42 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:08:54.997 09:36:42 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:08:54.997 09:36:42 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:54.997 09:36:42 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:08:54.997 09:36:42 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:08:54.997 09:36:42 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:54.997 09:36:42 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:54.997 09:36:42 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:08:54.997 09:36:42 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:08:54.997 09:36:42 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:54.997 09:36:42 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:08:54.997 09:36:42 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:08:54.997 09:36:42 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:08:54.997 09:36:42 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:08:54.997 09:36:42 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:54.997 09:36:42 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:08:54.997 09:36:42 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:08:54.997 09:36:42 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:54.997 09:36:42 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:54.997 09:36:42 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:08:54.997 09:36:42 nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:54.997 09:36:42 nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:54.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:54.997 --rc genhtml_branch_coverage=1 00:08:54.997 --rc genhtml_function_coverage=1 00:08:54.997 --rc genhtml_legend=1 00:08:54.997 --rc geninfo_all_blocks=1 00:08:54.997 --rc geninfo_unexecuted_blocks=1 00:08:54.997 00:08:54.997 ' 00:08:54.997 09:36:42 nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:54.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:54.997 --rc genhtml_branch_coverage=1 00:08:54.997 --rc genhtml_function_coverage=1 00:08:54.997 --rc genhtml_legend=1 00:08:54.997 --rc geninfo_all_blocks=1 00:08:54.997 --rc geninfo_unexecuted_blocks=1 00:08:54.997 00:08:54.997 ' 00:08:54.997 09:36:42 nvmf_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:54.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:54.997 --rc genhtml_branch_coverage=1 00:08:54.997 --rc genhtml_function_coverage=1 00:08:54.997 --rc genhtml_legend=1 00:08:54.997 --rc geninfo_all_blocks=1 00:08:54.997 --rc geninfo_unexecuted_blocks=1 00:08:54.997 00:08:54.997 ' 00:08:54.997 09:36:42 nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:54.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:54.997 --rc genhtml_branch_coverage=1 00:08:54.997 --rc genhtml_function_coverage=1 00:08:54.997 --rc genhtml_legend=1 00:08:54.997 --rc geninfo_all_blocks=1 00:08:54.997 --rc geninfo_unexecuted_blocks=1 00:08:54.997 00:08:54.997 ' 00:08:54.997 09:36:42 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:08:54.997 09:36:42 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:08:54.997 09:36:42 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:08:54.997 09:36:42 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:54.997 09:36:42 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:54.997 09:36:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:54.997 ************************************ 00:08:54.997 START TEST nvmf_target_core 00:08:54.997 ************************************ 00:08:54.997 09:36:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:08:54.997 * Looking for test storage... 00:08:54.997 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:08:54.997 09:36:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:54.997 09:36:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:54.997 09:36:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lcov --version 00:08:55.257 09:36:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:55.257 09:36:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:55.257 09:36:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:55.257 09:36:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:55.257 09:36:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:08:55.257 09:36:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:08:55.257 09:36:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:08:55.257 09:36:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:08:55.257 09:36:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:08:55.257 09:36:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:08:55.257 09:36:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:08:55.257 09:36:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:55.257 09:36:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:08:55.257 09:36:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:08:55.257 09:36:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:55.257 09:36:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:55.257 09:36:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:08:55.257 09:36:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:08:55.257 09:36:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:55.257 09:36:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:08:55.257 09:36:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:08:55.257 09:36:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:08:55.258 09:36:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:08:55.258 09:36:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:55.258 09:36:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:08:55.258 09:36:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:08:55.258 09:36:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:55.258 09:36:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:55.258 09:36:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:08:55.258 09:36:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:55.258 09:36:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:55.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:55.258 --rc genhtml_branch_coverage=1 00:08:55.258 --rc genhtml_function_coverage=1 00:08:55.258 --rc genhtml_legend=1 00:08:55.258 --rc geninfo_all_blocks=1 00:08:55.258 --rc geninfo_unexecuted_blocks=1 00:08:55.258 00:08:55.258 ' 00:08:55.258 09:36:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:55.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:55.258 --rc genhtml_branch_coverage=1 00:08:55.258 --rc genhtml_function_coverage=1 00:08:55.258 --rc genhtml_legend=1 00:08:55.258 --rc geninfo_all_blocks=1 00:08:55.258 --rc geninfo_unexecuted_blocks=1 00:08:55.258 00:08:55.258 ' 00:08:55.258 09:36:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:55.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:55.258 --rc genhtml_branch_coverage=1 00:08:55.258 --rc genhtml_function_coverage=1 00:08:55.258 --rc genhtml_legend=1 00:08:55.258 --rc geninfo_all_blocks=1 00:08:55.258 --rc geninfo_unexecuted_blocks=1 00:08:55.258 00:08:55.258 ' 00:08:55.258 09:36:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:55.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:55.258 --rc genhtml_branch_coverage=1 00:08:55.258 --rc genhtml_function_coverage=1 00:08:55.258 --rc genhtml_legend=1 00:08:55.258 --rc geninfo_all_blocks=1 00:08:55.258 --rc geninfo_unexecuted_blocks=1 00:08:55.258 00:08:55.258 ' 00:08:55.258 09:36:42 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:08:55.258 09:36:42 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:08:55.258 09:36:42 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:55.258 09:36:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:08:55.258 09:36:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:55.258 09:36:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:55.258 09:36:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:55.258 09:36:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:55.258 09:36:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:55.258 09:36:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:55.258 09:36:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:55.258 09:36:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:55.258 09:36:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:55.258 09:36:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:55.258 09:36:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c 00:08:55.258 09:36:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=9203ba0c-8506-4f0b-a886-a7f874c4694c 00:08:55.258 09:36:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:55.258 09:36:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:55.258 09:36:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:55.258 09:36:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:55.258 09:36:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:55.258 09:36:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:08:55.258 09:36:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:55.258 09:36:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:55.258 09:36:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:55.258 09:36:42 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:55.258 09:36:42 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
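At this point common.sh has derived the host identity that later connect calls pass along (NVME_HOSTNQN from nvme gen-hostnqn, NVME_HOSTID as its UUID suffix). A minimal sketch of that pairing follows; the suffix-stripping expansion is an assumption, since the exact expression is not visible in this trace.

# Assumed reconstruction of the hostnqn/hostid pair seen above.
# nvme gen-hostnqn is provided by nvme-cli.
NVME_HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
NVME_HOSTID=${NVME_HOSTNQN##*:}         # assumption: reuse the trailing UUID as the host ID
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
printf '%s\n' "${NVME_HOST[@]}"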
00:08:55.258 09:36:42 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:55.258 09:36:42 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:08:55.258 09:36:42 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:55.258 09:36:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:08:55.258 09:36:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:55.258 09:36:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:55.258 09:36:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:55.258 09:36:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:55.258 09:36:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:55.258 09:36:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:55.258 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:55.258 09:36:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:55.258 09:36:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:55.258 09:36:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:55.258 09:36:42 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:08:55.258 09:36:42 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:08:55.258 09:36:42 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 1 -eq 0 ]] 00:08:55.258 09:36:42 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:55.258 09:36:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:55.258 09:36:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:55.258 09:36:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:55.258 ************************************ 00:08:55.258 START TEST nvmf_host_management 00:08:55.258 ************************************ 00:08:55.258 09:36:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:55.258 * Looking for test storage... 
00:08:55.258 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:55.258 09:36:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:55.258 09:36:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:08:55.258 09:36:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:55.258 09:36:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:55.258 09:36:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:55.258 09:36:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:55.258 09:36:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:55.258 09:36:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:08:55.258 09:36:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:08:55.258 09:36:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:08:55.258 09:36:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:08:55.258 09:36:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:08:55.258 09:36:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:08:55.258 09:36:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:08:55.258 09:36:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:55.258 09:36:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:08:55.258 09:36:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:08:55.258 09:36:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:55.258 09:36:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:55.258 09:36:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:08:55.258 09:36:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:08:55.258 09:36:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:55.258 09:36:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:08:55.258 09:36:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:08:55.258 09:36:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:08:55.258 09:36:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:08:55.258 09:36:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:55.258 09:36:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:08:55.258 09:36:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:08:55.258 09:36:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:55.258 09:36:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:55.258 09:36:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:08:55.258 09:36:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:55.258 09:36:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:55.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:55.258 --rc genhtml_branch_coverage=1 00:08:55.258 --rc genhtml_function_coverage=1 00:08:55.259 --rc genhtml_legend=1 00:08:55.259 --rc geninfo_all_blocks=1 00:08:55.259 --rc geninfo_unexecuted_blocks=1 00:08:55.259 00:08:55.259 ' 00:08:55.259 09:36:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:55.259 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:55.259 --rc genhtml_branch_coverage=1 00:08:55.259 --rc genhtml_function_coverage=1 00:08:55.259 --rc genhtml_legend=1 00:08:55.259 --rc geninfo_all_blocks=1 00:08:55.259 --rc geninfo_unexecuted_blocks=1 00:08:55.259 00:08:55.259 ' 00:08:55.259 09:36:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:55.259 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:55.259 --rc genhtml_branch_coverage=1 00:08:55.259 --rc genhtml_function_coverage=1 00:08:55.259 --rc genhtml_legend=1 00:08:55.259 --rc geninfo_all_blocks=1 00:08:55.259 --rc geninfo_unexecuted_blocks=1 00:08:55.259 00:08:55.259 ' 00:08:55.259 09:36:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:55.259 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:55.259 --rc genhtml_branch_coverage=1 00:08:55.259 --rc genhtml_function_coverage=1 00:08:55.259 --rc genhtml_legend=1 00:08:55.259 --rc geninfo_all_blocks=1 00:08:55.259 --rc geninfo_unexecuted_blocks=1 00:08:55.259 00:08:55.259 ' 00:08:55.259 09:36:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
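The lcov version probe traced repeatedly in this log (lt 1.15 2 via cmp_versions) splits both version strings on '.', '-' and ':' and compares them field by field. A simplified sketch of that comparison, not the verbatim scripts/common.sh implementation:

# Simplified sketch of the version comparison driving the lcov option selection.
lt() { cmp_versions "$1" "<" "$2"; }
cmp_versions() {
    local op=$2 v ver1 ver2
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$3"
    # Compare field by field, treating missing fields as 0.
    for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
        ((${ver1[v]:-0} > ${ver2[v]:-0})) && { [[ $op == *">"* ]]; return; }
        ((${ver1[v]:-0} < ${ver2[v]:-0})) && { [[ $op == *"<"* ]]; return; }
    done
    [[ $op == *=* ]]
}
lt 1.15 2 && echo "lcov older than 2.x: keep the --rc lcov_*_coverage options"  # branch taken above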
00:08:55.259 09:36:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:08:55.259 09:36:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:55.259 09:36:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:55.259 09:36:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:55.259 09:36:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:55.259 09:36:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:55.259 09:36:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:55.259 09:36:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:55.259 09:36:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:55.259 09:36:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:55.259 09:36:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:55.259 09:36:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c 00:08:55.259 09:36:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=9203ba0c-8506-4f0b-a886-a7f874c4694c 00:08:55.259 09:36:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:55.259 09:36:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:55.259 09:36:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:55.259 09:36:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:55.259 09:36:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:55.259 09:36:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:08:55.518 09:36:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:55.518 09:36:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:55.518 09:36:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:55.518 09:36:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:55.518 09:36:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:55.518 09:36:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:55.518 09:36:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:08:55.518 09:36:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:55.518 09:36:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:08:55.518 09:36:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:55.518 09:36:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:55.518 09:36:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:55.518 09:36:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:55.518 09:36:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:55.518 09:36:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:55.518 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:55.518 09:36:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:55.518 09:36:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:55.518 09:36:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:55.518 09:36:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:55.518 09:36:42 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:55.518 09:36:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:08:55.518 09:36:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:55.518 09:36:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:55.518 09:36:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:55.518 09:36:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:55.518 09:36:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:55.518 09:36:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:55.518 09:36:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:55.518 09:36:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:55.518 09:36:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:08:55.518 09:36:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:08:55.518 09:36:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:08:55.519 09:36:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:08:55.519 09:36:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:08:55.519 09:36:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@460 -- # nvmf_veth_init 00:08:55.519 09:36:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:55.519 09:36:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:55.519 09:36:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:55.519 09:36:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:55.519 09:36:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:55.519 09:36:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:55.519 09:36:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:55.519 09:36:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:55.519 09:36:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:55.519 09:36:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:55.519 09:36:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:55.519 09:36:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:55.519 09:36:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:55.519 09:36:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:55.519 09:36:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:55.519 09:36:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:55.519 09:36:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:08:55.519 Cannot find device "nvmf_init_br" 00:08:55.519 09:36:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:08:55.519 09:36:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:55.519 Cannot find device "nvmf_init_br2" 00:08:55.519 09:36:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:08:55.519 09:36:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:55.519 Cannot find device "nvmf_tgt_br" 00:08:55.519 09:36:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # true 00:08:55.519 09:36:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:55.519 Cannot find device "nvmf_tgt_br2" 00:08:55.519 09:36:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # true 00:08:55.519 09:36:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:55.519 Cannot find device "nvmf_init_br" 00:08:55.519 09:36:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # true 00:08:55.519 09:36:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:55.519 Cannot find device "nvmf_init_br2" 00:08:55.519 09:36:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # true 00:08:55.519 09:36:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:55.519 Cannot find device "nvmf_tgt_br" 00:08:55.519 09:36:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # true 00:08:55.519 09:36:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:55.519 Cannot find device "nvmf_tgt_br2" 00:08:55.519 09:36:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # true 00:08:55.519 09:36:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:55.519 Cannot find device "nvmf_br" 00:08:55.519 09:36:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # true 00:08:55.519 09:36:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:55.519 Cannot find device "nvmf_init_if" 00:08:55.519 09:36:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # true 00:08:55.519 09:36:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:55.519 Cannot find device "nvmf_init_if2" 00:08:55.519 09:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # true 00:08:55.519 09:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:55.519 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:55.519 09:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@173 -- # true 00:08:55.519 09:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:55.519 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:55.519 09:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # true 00:08:55.519 09:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:55.519 09:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:55.519 09:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:08:55.519 09:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:55.519 09:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:55.519 09:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:55.519 09:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:55.519 09:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:55.519 09:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:55.519 09:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:55.519 09:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:55.519 09:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:55.519 09:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:55.519 09:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:55.519 09:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:55.519 09:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:08:55.519 09:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:55.519 09:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:55.519 09:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:55.519 09:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:55.519 09:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@207 -- # ip 
link add nvmf_br type bridge 00:08:55.778 09:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:08:55.778 09:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:55.778 09:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:08:55.778 09:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:55.778 09:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:55.778 09:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:55.778 09:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:55.778 09:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:55.778 09:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:55.778 09:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:55.778 09:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:55.778 09:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:55.778 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:55.778 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.117 ms 00:08:55.778 00:08:55.778 --- 10.0.0.3 ping statistics --- 00:08:55.778 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:55.778 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:08:55.778 09:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:55.778 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:55.778 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.044 ms 00:08:55.778 00:08:55.778 --- 10.0.0.4 ping statistics --- 00:08:55.778 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:55.778 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:08:55.778 09:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:55.778 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:55.778 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.070 ms 00:08:55.778 00:08:55.778 --- 10.0.0.1 ping statistics --- 00:08:55.778 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:55.778 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:08:55.778 09:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:55.778 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:55.778 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.056 ms 00:08:55.778 00:08:55.778 --- 10.0.0.2 ping statistics --- 00:08:55.778 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:55.778 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:08:55.778 09:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:55.778 09:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@461 -- # return 0 00:08:55.778 09:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:55.778 09:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:55.778 09:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:55.778 09:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:55.778 09:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:55.778 09:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:55.778 09:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:55.778 09:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:08:55.778 09:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:08:55.778 09:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:08:55.778 09:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:55.778 09:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:55.778 09:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:55.778 09:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:08:55.778 09:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=62428 00:08:55.778 09:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 62428 00:08:55.778 09:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 62428 ']' 00:08:55.778 09:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:55.778 09:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:55.778 09:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:55.778 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:55.778 09:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:55.778 09:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:55.778 [2024-11-19 09:36:43.392115] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 
00:08:55.778 [2024-11-19 09:36:43.392251] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:56.037 [2024-11-19 09:36:43.549543] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:56.037 [2024-11-19 09:36:43.624742] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:56.037 [2024-11-19 09:36:43.624813] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:56.037 [2024-11-19 09:36:43.624827] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:56.037 [2024-11-19 09:36:43.624838] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:56.037 [2024-11-19 09:36:43.624847] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:56.037 [2024-11-19 09:36:43.626095] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:56.037 [2024-11-19 09:36:43.626383] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:56.037 [2024-11-19 09:36:43.626505] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:08:56.037 [2024-11-19 09:36:43.626512] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:56.296 [2024-11-19 09:36:43.687156] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:56.296 09:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:56.296 09:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:08:56.296 09:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:56.296 09:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:56.296 09:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:56.296 09:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:56.296 09:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:56.296 09:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.296 09:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:56.296 [2024-11-19 09:36:43.806652] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:56.296 09:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.296 09:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:08:56.296 09:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:56.296 09:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:56.296 09:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 
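The rpcs.txt batch that gets cat'd and executed next boils down to a short target-side setup: create the TCP transport, back it with a 64 MiB / 512-byte malloc bdev (MALLOC_BDEV_SIZE / MALLOC_BLOCK_SIZE above), and expose it on 10.0.0.3:4420. Issued one at a time through rpc.py it would look roughly like the sketch below; the transport options, bdev geometry, subsystem NQN and listen address are taken from the trace, while the serial number and the exact add_ns/add_listener flags are assumptions.

# Sketch: the batched target setup expressed as individual rpc.py calls.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create -b Malloc0 64 512
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420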
00:08:56.296 09:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:08:56.296 09:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:08:56.296 09:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.296 09:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:56.296 Malloc0 00:08:56.296 [2024-11-19 09:36:43.889555] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:56.296 09:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.296 09:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:08:56.296 09:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:56.296 09:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:56.555 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:56.555 09:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=62475 00:08:56.555 09:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 62475 /var/tmp/bdevperf.sock 00:08:56.555 09:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 62475 ']' 00:08:56.555 09:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:56.555 09:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:56.555 09:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:08:56.555 09:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:56.555 09:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:56.555 09:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:08:56.555 09:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:08:56.555 09:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:08:56.555 09:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:08:56.555 09:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:56.555 09:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:56.555 { 00:08:56.555 "params": { 00:08:56.555 "name": "Nvme$subsystem", 00:08:56.555 "trtype": "$TEST_TRANSPORT", 00:08:56.555 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:56.555 "adrfam": "ipv4", 00:08:56.555 "trsvcid": "$NVMF_PORT", 00:08:56.555 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:56.555 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:56.555 "hdgst": ${hdgst:-false}, 00:08:56.555 "ddgst": ${ddgst:-false} 00:08:56.555 }, 00:08:56.555 "method": "bdev_nvme_attach_controller" 00:08:56.555 } 00:08:56.555 EOF 00:08:56.555 )") 00:08:56.555 09:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:08:56.555 09:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:08:56.555 09:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:08:56.555 09:36:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:56.555 "params": { 00:08:56.555 "name": "Nvme0", 00:08:56.555 "trtype": "tcp", 00:08:56.555 "traddr": "10.0.0.3", 00:08:56.555 "adrfam": "ipv4", 00:08:56.555 "trsvcid": "4420", 00:08:56.555 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:56.555 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:56.555 "hdgst": false, 00:08:56.555 "ddgst": false 00:08:56.555 }, 00:08:56.555 "method": "bdev_nvme_attach_controller" 00:08:56.555 }' 00:08:56.555 [2024-11-19 09:36:43.994652] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:08:56.555 [2024-11-19 09:36:43.994753] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62475 ] 00:08:56.555 [2024-11-19 09:36:44.139289] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:56.814 [2024-11-19 09:36:44.204064] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:56.814 [2024-11-19 09:36:44.268856] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:56.814 Running I/O for 10 seconds... 
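The bdevperf initiator just launched reads its bdev config from /dev/fd/63; the resolved JSON for Nvme0 is printed above. The same run can be reproduced with the config written to an ordinary file, as in the sketch below; the file name is illustrative and the outer wrapper is assumed to follow the same subsystems/bdev/config shape as the dd config earlier in the log.

# Sketch: equivalent standalone bdevperf invocation with the config in a file.
cat > /tmp/nvme0.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.3",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock \
    --json /tmp/nvme0.json -q 64 -o 65536 -w verify -t 10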
00:08:56.814 09:36:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:56.814 09:36:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:08:56.814 09:36:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:08:56.814 09:36:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.814 09:36:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:57.072 09:36:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.072 09:36:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:57.072 09:36:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:08:57.072 09:36:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:08:57.072 09:36:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:08:57.072 09:36:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:08:57.072 09:36:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:08:57.072 09:36:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:08:57.072 09:36:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:08:57.072 09:36:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:08:57.072 09:36:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:08:57.072 09:36:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.072 09:36:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:57.072 09:36:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.072 09:36:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:08:57.072 09:36:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:08:57.072 09:36:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:08:57.332 09:36:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:08:57.332 09:36:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:08:57.332 09:36:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:08:57.332 09:36:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.332 09:36:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:08:57.332 09:36:44 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:57.332 09:36:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.332 09:36:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=579 00:08:57.332 09:36:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 579 -ge 100 ']' 00:08:57.332 09:36:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:08:57.332 09:36:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:08:57.332 09:36:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:08:57.332 09:36:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:57.332 09:36:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.332 09:36:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:57.332 [2024-11-19 09:36:44.814651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:57.332 [2024-11-19 09:36:44.814695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:57.332 [2024-11-19 09:36:44.814718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:82048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:57.332 [2024-11-19 09:36:44.814729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:57.332 [2024-11-19 09:36:44.814742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:82176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:57.332 [2024-11-19 09:36:44.814752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:57.332 [2024-11-19 09:36:44.814764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:82304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:57.332 [2024-11-19 09:36:44.814773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:57.332 [2024-11-19 09:36:44.814786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:82432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:57.332 [2024-11-19 09:36:44.814796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:57.332 [2024-11-19 09:36:44.814807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:82560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:57.332 [2024-11-19 09:36:44.814817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:57.332 [2024-11-19 09:36:44.814829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:82688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:57.332 [2024-11-19 
09:36:44.814839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:57.332 [2024-11-19 09:36:44.814851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:82816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:57.332 [2024-11-19 09:36:44.814861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:57.332 [2024-11-19 09:36:44.814873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:82944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:57.332 [2024-11-19 09:36:44.814883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:57.332 [2024-11-19 09:36:44.814894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:83072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:57.332 [2024-11-19 09:36:44.814904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:57.332 [2024-11-19 09:36:44.814917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:83200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:57.332 [2024-11-19 09:36:44.814927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:57.332 [2024-11-19 09:36:44.814938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:83328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:57.332 [2024-11-19 09:36:44.814948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:57.332 [2024-11-19 09:36:44.814960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:83456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:57.332 [2024-11-19 09:36:44.814970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:57.332 [2024-11-19 09:36:44.814988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:83584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:57.332 [2024-11-19 09:36:44.814998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:57.332 [2024-11-19 09:36:44.815010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:83712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:57.332 [2024-11-19 09:36:44.815020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:57.332 [2024-11-19 09:36:44.815031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:83840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:57.332 [2024-11-19 09:36:44.815041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:57.332 [2024-11-19 09:36:44.815053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:83968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:57.332 [2024-11-19 09:36:44.815063] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:57.332 [2024-11-19 09:36:44.815074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:84096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:57.332 [2024-11-19 09:36:44.815092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:57.332 [2024-11-19 09:36:44.815104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:84224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:57.332 [2024-11-19 09:36:44.815113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:57.332 [2024-11-19 09:36:44.815125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:57.332 [2024-11-19 09:36:44.815135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:57.332 [2024-11-19 09:36:44.815147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:84480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:57.332 [2024-11-19 09:36:44.815157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:57.332 [2024-11-19 09:36:44.815168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:84608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:57.332 [2024-11-19 09:36:44.815178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:57.332 [2024-11-19 09:36:44.815190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:84736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:57.332 [2024-11-19 09:36:44.815200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:57.332 [2024-11-19 09:36:44.815225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:57.332 [2024-11-19 09:36:44.815237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:57.332 [2024-11-19 09:36:44.815252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:84992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:57.332 [2024-11-19 09:36:44.815261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:57.332 [2024-11-19 09:36:44.815273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:85120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:57.332 [2024-11-19 09:36:44.815284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:57.332 [2024-11-19 09:36:44.815304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:85248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:57.332 [2024-11-19 09:36:44.815313] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:57.332 [2024-11-19 09:36:44.815325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:85376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:57.332 [2024-11-19 09:36:44.815334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:57.333 [2024-11-19 09:36:44.815346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:85504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:57.333 [2024-11-19 09:36:44.815356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:57.333 [2024-11-19 09:36:44.815367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:85632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:57.333 [2024-11-19 09:36:44.815377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:57.333 [2024-11-19 09:36:44.815388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:85760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:57.333 [2024-11-19 09:36:44.815398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:57.333 [2024-11-19 09:36:44.815409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:85888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:57.333 [2024-11-19 09:36:44.815419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:57.333 [2024-11-19 09:36:44.815430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:86016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:57.333 [2024-11-19 09:36:44.815440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:57.333 [2024-11-19 09:36:44.815451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:86144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:57.333 [2024-11-19 09:36:44.815466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:57.333 [2024-11-19 09:36:44.815478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:86272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:57.333 [2024-11-19 09:36:44.815488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:57.333 [2024-11-19 09:36:44.815499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:86400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:57.333 [2024-11-19 09:36:44.815509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:57.333 [2024-11-19 09:36:44.815520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:86528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:57.333 [2024-11-19 09:36:44.815530] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:57.333 [2024-11-19 09:36:44.815541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:86656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:57.333 [2024-11-19 09:36:44.815551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:57.333 [2024-11-19 09:36:44.815562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:86784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:57.333 [2024-11-19 09:36:44.815572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:57.333 [2024-11-19 09:36:44.815583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:86912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:57.333 [2024-11-19 09:36:44.815593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:57.333 [2024-11-19 09:36:44.815605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:87040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:57.333 [2024-11-19 09:36:44.815614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:57.333 [2024-11-19 09:36:44.815626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:87168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:57.333 [2024-11-19 09:36:44.815636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:57.333 [2024-11-19 09:36:44.815648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:87296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:57.333 [2024-11-19 09:36:44.815657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:57.333 [2024-11-19 09:36:44.815669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:87424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:57.333 [2024-11-19 09:36:44.815680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:57.333 [2024-11-19 09:36:44.815692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:87552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:57.333 [2024-11-19 09:36:44.815701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:57.333 [2024-11-19 09:36:44.815713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:87680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:57.333 [2024-11-19 09:36:44.815723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:57.333 [2024-11-19 09:36:44.815734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:87808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:57.333 [2024-11-19 09:36:44.815744] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:57.333 [2024-11-19 09:36:44.815755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:87936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:57.333 [2024-11-19 09:36:44.815765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:57.333 [2024-11-19 09:36:44.815776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:88064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:57.333 [2024-11-19 09:36:44.815786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:57.333 [2024-11-19 09:36:44.815798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:88192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:57.333 [2024-11-19 09:36:44.815816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:57.333 [2024-11-19 09:36:44.815828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:88320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:57.333 [2024-11-19 09:36:44.815837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:57.333 [2024-11-19 09:36:44.815849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:88448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:57.333 [2024-11-19 09:36:44.815859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:57.333 [2024-11-19 09:36:44.815870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:88576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:57.333 [2024-11-19 09:36:44.815879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:57.333 [2024-11-19 09:36:44.815891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:88704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:57.333 [2024-11-19 09:36:44.815901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:57.333 [2024-11-19 09:36:44.815912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:88832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:57.333 [2024-11-19 09:36:44.815922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:57.333 [2024-11-19 09:36:44.815933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:88960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:57.333 [2024-11-19 09:36:44.815943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:57.333 [2024-11-19 09:36:44.815954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:89088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:57.333 [2024-11-19 09:36:44.815964] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:57.333 [2024-11-19 09:36:44.815975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:89216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:57.333 [2024-11-19 09:36:44.815985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:57.333 [2024-11-19 09:36:44.815996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:89344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:57.333 [2024-11-19 09:36:44.816005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:57.333 [2024-11-19 09:36:44.816017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:89472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:57.333 [2024-11-19 09:36:44.816026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:57.333 [2024-11-19 09:36:44.816037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:89600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:57.333 [2024-11-19 09:36:44.816047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:57.333 [2024-11-19 09:36:44.816058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:89728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:57.333 [2024-11-19 09:36:44.816068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:57.333 [2024-11-19 09:36:44.816079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:89856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:57.333 [2024-11-19 09:36:44.816089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:57.333 [2024-11-19 09:36:44.816100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:89984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:57.333 [2024-11-19 09:36:44.816109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:57.333 [2024-11-19 09:36:44.816120] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156f2d0 is same with the state(6) to be set 00:08:57.333 [2024-11-19 09:36:44.816286] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:08:57.333 [2024-11-19 09:36:44.816304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:57.333 [2024-11-19 09:36:44.816325] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:08:57.333 [2024-11-19 09:36:44.816336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:57.333 [2024-11-19 09:36:44.816346] nvme_qpair.c: 223:nvme_admin_qpair_print_command: 
*NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:08:57.334 [2024-11-19 09:36:44.816356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:57.334 [2024-11-19 09:36:44.816366] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:08:57.334 [2024-11-19 09:36:44.816375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:57.334 [2024-11-19 09:36:44.816385] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1574ce0 is same with the state(6) to be set 00:08:57.334 task offset: 81920 on job bdev=Nvme0n1 fails 00:08:57.334 00:08:57.334 Latency(us) 00:08:57.334 [2024-11-19T09:36:44.957Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:57.334 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:57.334 Job: Nvme0n1 ended in about 0.43 seconds with error 00:08:57.334 Verification LBA range: start 0x0 length 0x400 00:08:57.334 Nvme0n1 : 0.43 1482.42 92.65 148.24 0.00 37911.24 2219.29 39321.60 00:08:57.334 [2024-11-19T09:36:44.957Z] =================================================================================================================== 00:08:57.334 [2024-11-19T09:36:44.957Z] Total : 1482.42 92.65 148.24 0.00 37911.24 2219.29 39321.60 00:08:57.334 [2024-11-19 09:36:44.817505] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:08:57.334 09:36:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.334 09:36:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:57.334 09:36:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.334 09:36:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:57.334 [2024-11-19 09:36:44.819431] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:57.334 [2024-11-19 09:36:44.819461] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1574ce0 (9): Bad file descriptor 00:08:57.334 09:36:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.334 09:36:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:08:57.334 [2024-11-19 09:36:44.831835] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
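As a quick consistency check on the Latency(us) table above, the MiB/s column is IOPS multiplied by the 64 KiB (65536-byte) I/O size reported for the job; the conversion below is the only arithmetic added here:
    1482.42 IOPS x 65536 B ~= 97,151,877 B/s ~= 92.65 MiB/s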
00:08:58.269 09:36:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 62475 00:08:58.269 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (62475) - No such process 00:08:58.269 09:36:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:08:58.269 09:36:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:08:58.269 09:36:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:08:58.269 09:36:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:08:58.269 09:36:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:08:58.269 09:36:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:08:58.269 09:36:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:58.269 09:36:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:58.269 { 00:08:58.269 "params": { 00:08:58.269 "name": "Nvme$subsystem", 00:08:58.269 "trtype": "$TEST_TRANSPORT", 00:08:58.269 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:58.269 "adrfam": "ipv4", 00:08:58.269 "trsvcid": "$NVMF_PORT", 00:08:58.269 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:58.269 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:58.269 "hdgst": ${hdgst:-false}, 00:08:58.269 "ddgst": ${ddgst:-false} 00:08:58.269 }, 00:08:58.269 "method": "bdev_nvme_attach_controller" 00:08:58.269 } 00:08:58.269 EOF 00:08:58.269 )") 00:08:58.269 09:36:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:08:58.269 09:36:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:08:58.269 09:36:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:08:58.269 09:36:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:58.269 "params": { 00:08:58.269 "name": "Nvme0", 00:08:58.269 "trtype": "tcp", 00:08:58.269 "traddr": "10.0.0.3", 00:08:58.269 "adrfam": "ipv4", 00:08:58.269 "trsvcid": "4420", 00:08:58.269 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:58.269 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:58.269 "hdgst": false, 00:08:58.269 "ddgst": false 00:08:58.269 }, 00:08:58.269 "method": "bdev_nvme_attach_controller" 00:08:58.269 }' 00:08:58.269 [2024-11-19 09:36:45.889059] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 
00:08:58.269 [2024-11-19 09:36:45.889158] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62514 ] 00:08:58.528 [2024-11-19 09:36:46.040618] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:58.528 [2024-11-19 09:36:46.101696] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:58.786 [2024-11-19 09:36:46.167476] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:58.786 Running I/O for 1 seconds... 00:08:59.722 1536.00 IOPS, 96.00 MiB/s 00:08:59.722 Latency(us) 00:08:59.722 [2024-11-19T09:36:47.345Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:59.722 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:59.722 Verification LBA range: start 0x0 length 0x400 00:08:59.722 Nvme0n1 : 1.04 1540.42 96.28 0.00 0.00 40739.79 4289.63 37653.41 00:08:59.722 [2024-11-19T09:36:47.345Z] =================================================================================================================== 00:08:59.722 [2024-11-19T09:36:47.345Z] Total : 1540.42 96.28 0.00 0.00 40739.79 4289.63 37653.41 00:08:59.981 09:36:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:08:59.981 09:36:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:08:59.981 09:36:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:08:59.981 09:36:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:08:59.981 09:36:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:08:59.981 09:36:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:59.981 09:36:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:08:59.981 09:36:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:59.981 09:36:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:08:59.981 09:36:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:59.981 09:36:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:59.981 rmmod nvme_tcp 00:09:00.239 rmmod nvme_fabrics 00:09:00.239 rmmod nvme_keyring 00:09:00.239 09:36:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:00.239 09:36:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:09:00.239 09:36:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:09:00.239 09:36:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 62428 ']' 00:09:00.239 09:36:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 62428 00:09:00.239 09:36:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 62428 ']' 00:09:00.239 09:36:47 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 62428 00:09:00.239 09:36:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:09:00.239 09:36:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:00.239 09:36:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62428 00:09:00.239 09:36:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:00.239 09:36:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:00.239 09:36:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62428' 00:09:00.239 killing process with pid 62428 00:09:00.239 09:36:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 62428 00:09:00.239 09:36:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 62428 00:09:00.498 [2024-11-19 09:36:47.896408] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:09:00.498 09:36:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:00.498 09:36:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:00.498 09:36:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:00.498 09:36:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:09:00.498 09:36:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:09:00.498 09:36:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:00.498 09:36:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:09:00.498 09:36:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:00.498 09:36:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:00.498 09:36:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:00.498 09:36:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:00.498 09:36:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:00.498 09:36:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:00.498 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:00.498 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:00.498 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:00.498 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:00.498 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:00.498 09:36:48 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:00.498 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:00.498 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:00.757 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:00.757 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:00.757 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:00.757 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:00.757 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:00.757 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@300 -- # return 0 00:09:00.757 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:09:00.757 00:09:00.757 real 0m5.504s 00:09:00.757 user 0m19.550s 00:09:00.757 sys 0m1.499s 00:09:00.757 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:00.757 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:00.757 ************************************ 00:09:00.757 END TEST nvmf_host_management 00:09:00.757 ************************************ 00:09:00.757 09:36:48 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:09:00.757 09:36:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:00.757 09:36:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:00.757 09:36:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:00.757 ************************************ 00:09:00.757 START TEST nvmf_lvol 00:09:00.757 ************************************ 00:09:00.757 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:09:00.757 * Looking for test storage... 
00:09:00.757 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:00.757 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:00.757 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:00.757 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:09:01.018 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:01.018 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:01.018 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:01.018 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:01.018 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:09:01.018 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:09:01.018 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:09:01.018 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:09:01.018 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:09:01.018 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:09:01.018 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:09:01.018 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:01.018 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:09:01.018 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:09:01.018 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:01.018 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:01.018 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:09:01.018 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:09:01.018 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:01.018 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:09:01.018 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:09:01.018 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:09:01.018 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:09:01.018 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:01.018 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:09:01.018 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:09:01.018 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:01.018 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:01.018 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:09:01.018 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:01.018 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:01.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:01.018 --rc genhtml_branch_coverage=1 00:09:01.018 --rc genhtml_function_coverage=1 00:09:01.018 --rc genhtml_legend=1 00:09:01.018 --rc geninfo_all_blocks=1 00:09:01.018 --rc geninfo_unexecuted_blocks=1 00:09:01.018 00:09:01.018 ' 00:09:01.018 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:01.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:01.018 --rc genhtml_branch_coverage=1 00:09:01.018 --rc genhtml_function_coverage=1 00:09:01.018 --rc genhtml_legend=1 00:09:01.018 --rc geninfo_all_blocks=1 00:09:01.018 --rc geninfo_unexecuted_blocks=1 00:09:01.018 00:09:01.018 ' 00:09:01.018 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:01.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:01.018 --rc genhtml_branch_coverage=1 00:09:01.018 --rc genhtml_function_coverage=1 00:09:01.018 --rc genhtml_legend=1 00:09:01.018 --rc geninfo_all_blocks=1 00:09:01.018 --rc geninfo_unexecuted_blocks=1 00:09:01.018 00:09:01.018 ' 00:09:01.018 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:01.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:01.018 --rc genhtml_branch_coverage=1 00:09:01.018 --rc genhtml_function_coverage=1 00:09:01.018 --rc genhtml_legend=1 00:09:01.018 --rc geninfo_all_blocks=1 00:09:01.018 --rc geninfo_unexecuted_blocks=1 00:09:01.018 00:09:01.018 ' 00:09:01.018 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:01.018 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:09:01.018 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:01.018 09:36:48 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:01.018 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:01.018 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:01.018 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:01.018 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:01.018 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:01.018 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:01.018 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:01.018 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:01.018 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c 00:09:01.018 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=9203ba0c-8506-4f0b-a886-a7f874c4694c 00:09:01.018 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:01.018 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:01.018 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:01.018 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:01.018 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:01.018 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:09:01.018 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:01.018 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:01.018 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:01.018 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.018 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.019 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.019 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:09:01.019 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.019 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:09:01.019 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:01.019 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:01.019 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:01.019 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:01.019 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:01.019 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:01.019 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:01.019 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:01.019 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:01.019 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:01.019 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:01.019 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:01.019 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:09:01.019 
09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:09:01.019 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:01.019 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:09:01.019 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:01.019 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:01.019 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:01.019 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:01.019 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:01.019 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:01.019 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:01.019 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:01.019 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:09:01.019 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:09:01.019 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:09:01.019 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:09:01.019 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:09:01.019 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@460 -- # nvmf_veth_init 00:09:01.019 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:01.019 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:01.019 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:01.019 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:01.019 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:01.019 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:01.019 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:01.019 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:01.019 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:01.019 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:01.019 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:01.019 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:01.019 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:01.019 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 
00:09:01.019 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:01.019 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:01.019 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:01.019 Cannot find device "nvmf_init_br" 00:09:01.019 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:09:01.019 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:01.019 Cannot find device "nvmf_init_br2" 00:09:01.019 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:09:01.019 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:01.019 Cannot find device "nvmf_tgt_br" 00:09:01.019 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # true 00:09:01.019 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:01.019 Cannot find device "nvmf_tgt_br2" 00:09:01.019 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # true 00:09:01.019 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:01.019 Cannot find device "nvmf_init_br" 00:09:01.019 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # true 00:09:01.019 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:01.019 Cannot find device "nvmf_init_br2" 00:09:01.019 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # true 00:09:01.019 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:01.019 Cannot find device "nvmf_tgt_br" 00:09:01.019 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # true 00:09:01.019 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:01.019 Cannot find device "nvmf_tgt_br2" 00:09:01.019 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # true 00:09:01.019 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:01.019 Cannot find device "nvmf_br" 00:09:01.019 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # true 00:09:01.019 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:01.019 Cannot find device "nvmf_init_if" 00:09:01.019 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # true 00:09:01.019 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:01.019 Cannot find device "nvmf_init_if2" 00:09:01.019 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # true 00:09:01.019 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:01.019 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:01.019 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # true 00:09:01.019 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:01.019 Cannot open network namespace "nvmf_tgt_ns_spdk": No 
such file or directory 00:09:01.019 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # true 00:09:01.019 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:01.019 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:01.019 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:01.019 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:01.019 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:01.019 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:01.019 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:01.019 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:01.019 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:01.019 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:01.279 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:01.279 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:01.279 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:01.279 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:01.279 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:01.279 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:01.279 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:01.279 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:01.280 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:01.280 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:01.280 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:01.280 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:01.280 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:01.280 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:01.280 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:01.280 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:01.280 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@217 -- # ipts -I INPUT 
1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:01.280 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:01.280 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:01.280 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:01.280 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:01.280 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:01.280 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:01.280 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:01.280 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.072 ms 00:09:01.280 00:09:01.280 --- 10.0.0.3 ping statistics --- 00:09:01.280 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:01.280 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:09:01.280 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:01.280 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:01.280 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.070 ms 00:09:01.280 00:09:01.280 --- 10.0.0.4 ping statistics --- 00:09:01.280 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:01.280 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:09:01.280 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:01.280 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:01.280 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:09:01.280 00:09:01.280 --- 10.0.0.1 ping statistics --- 00:09:01.280 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:01.280 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:09:01.280 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:01.280 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:01.280 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 00:09:01.280 00:09:01.280 --- 10.0.0.2 ping statistics --- 00:09:01.280 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:01.280 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:09:01.280 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:01.280 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@461 -- # return 0 00:09:01.280 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:01.280 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:01.280 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:01.280 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:01.280 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:01.280 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:01.280 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:01.280 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:09:01.280 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:01.280 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:01.280 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:01.280 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=62780 00:09:01.280 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:09:01.280 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 62780 00:09:01.280 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 62780 ']' 00:09:01.280 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:01.280 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:01.280 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:01.280 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:01.280 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:01.280 09:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:01.280 [2024-11-19 09:36:48.878470] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 
00:09:01.280 [2024-11-19 09:36:48.878581] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:01.539 [2024-11-19 09:36:49.032735] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:01.539 [2024-11-19 09:36:49.103094] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:01.539 [2024-11-19 09:36:49.103164] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:01.539 [2024-11-19 09:36:49.103178] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:01.539 [2024-11-19 09:36:49.103189] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:01.539 [2024-11-19 09:36:49.103198] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:01.539 [2024-11-19 09:36:49.104489] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:01.539 [2024-11-19 09:36:49.104596] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:01.539 [2024-11-19 09:36:49.104602] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:01.797 [2024-11-19 09:36:49.163408] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:01.797 09:36:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:01.797 09:36:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:09:01.797 09:36:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:01.797 09:36:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:01.797 09:36:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:01.797 09:36:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:01.797 09:36:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:02.055 [2024-11-19 09:36:49.523905] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:02.055 09:36:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:02.313 09:36:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:09:02.313 09:36:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:02.881 09:36:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:09:02.881 09:36:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:09:03.139 09:36:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:09:03.397 09:36:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=0934762b-8bcb-4b3f-ac4e-1ff89668c232 00:09:03.397 09:36:50 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 0934762b-8bcb-4b3f-ac4e-1ff89668c232 lvol 20 00:09:03.657 09:36:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=8c21c3f1-775c-41b8-932b-6cc196a1cbfd 00:09:03.657 09:36:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:03.920 09:36:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 8c21c3f1-775c-41b8-932b-6cc196a1cbfd 00:09:04.179 09:36:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:09:04.437 [2024-11-19 09:36:51.982021] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:04.437 09:36:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:09:05.004 09:36:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=62854 00:09:05.004 09:36:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:09:05.004 09:36:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:09:05.940 09:36:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 8c21c3f1-775c-41b8-932b-6cc196a1cbfd MY_SNAPSHOT 00:09:06.199 09:36:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=f6d01a2f-95bf-4611-b075-752cfd2c642d 00:09:06.199 09:36:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 8c21c3f1-775c-41b8-932b-6cc196a1cbfd 30 00:09:06.458 09:36:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone f6d01a2f-95bf-4611-b075-752cfd2c642d MY_CLONE 00:09:06.717 09:36:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=a1aa2333-8d44-4f3a-889f-28a9e9a9d079 00:09:06.717 09:36:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate a1aa2333-8d44-4f3a-889f-28a9e9a9d079 00:09:07.285 09:36:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 62854 00:09:15.473 Initializing NVMe Controllers 00:09:15.473 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode0 00:09:15.473 Controller IO queue size 128, less than required. 00:09:15.473 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:15.473 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:09:15.473 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:09:15.473 Initialization complete. Launching workers. 
00:09:15.473 ======================================================== 00:09:15.473 Latency(us) 00:09:15.473 Device Information : IOPS MiB/s Average min max 00:09:15.473 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10046.40 39.24 12744.96 1470.08 79670.30 00:09:15.473 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 9887.90 38.62 12947.14 3228.93 78352.23 00:09:15.473 ======================================================== 00:09:15.473 Total : 19934.30 77.87 12845.25 1470.08 79670.30 00:09:15.473 00:09:15.473 09:37:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:15.473 09:37:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 8c21c3f1-775c-41b8-932b-6cc196a1cbfd 00:09:15.731 09:37:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 0934762b-8bcb-4b3f-ac4e-1ff89668c232 00:09:15.990 09:37:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:09:15.990 09:37:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:09:15.990 09:37:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:09:15.990 09:37:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:15.990 09:37:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:09:15.990 09:37:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:15.990 09:37:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:09:15.990 09:37:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:15.990 09:37:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:15.990 rmmod nvme_tcp 00:09:15.990 rmmod nvme_fabrics 00:09:16.249 rmmod nvme_keyring 00:09:16.249 09:37:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:16.249 09:37:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:09:16.249 09:37:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:09:16.249 09:37:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 62780 ']' 00:09:16.249 09:37:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 62780 00:09:16.249 09:37:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 62780 ']' 00:09:16.249 09:37:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 62780 00:09:16.249 09:37:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:09:16.249 09:37:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:16.249 09:37:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62780 00:09:16.249 09:37:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:16.249 09:37:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:16.249 killing process with pid 62780 00:09:16.249 09:37:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 62780' 00:09:16.249 09:37:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 62780 00:09:16.249 09:37:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 62780 00:09:16.507 09:37:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:16.507 09:37:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:16.507 09:37:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:16.507 09:37:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:09:16.507 09:37:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:16.507 09:37:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:09:16.507 09:37:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:09:16.507 09:37:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:16.507 09:37:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:16.507 09:37:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:16.507 09:37:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:16.507 09:37:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:16.507 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:16.507 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:16.507 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:16.507 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:16.507 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:16.507 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:16.507 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:16.507 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:16.765 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:16.765 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:16.765 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:16.765 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:16.765 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:16.765 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:16.765 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@300 -- # return 0 00:09:16.765 00:09:16.765 real 0m15.960s 00:09:16.765 user 1m5.798s 00:09:16.765 sys 0m4.251s 00:09:16.765 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:09:16.765 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:16.765 ************************************ 00:09:16.765 END TEST nvmf_lvol 00:09:16.765 ************************************ 00:09:16.765 09:37:04 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:09:16.765 09:37:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:16.765 09:37:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:16.765 09:37:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:16.765 ************************************ 00:09:16.765 START TEST nvmf_lvs_grow 00:09:16.765 ************************************ 00:09:16.765 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:09:16.765 * Looking for test storage... 00:09:16.765 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:16.765 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:16.765 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:16.765 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:09:17.024 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:17.024 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:17.024 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:17.024 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:17.024 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:09:17.024 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:09:17.024 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:09:17.024 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:09:17.024 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:09:17.024 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:09:17.024 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:09:17.024 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:17.024 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:09:17.024 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:09:17.024 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:17.024 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:17.024 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:09:17.024 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:09:17.024 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:17.024 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:09:17.024 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:09:17.024 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:09:17.024 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:09:17.024 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:17.024 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:09:17.024 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:09:17.024 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:17.024 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:17.024 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:09:17.024 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:17.024 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:17.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:17.024 --rc genhtml_branch_coverage=1 00:09:17.024 --rc genhtml_function_coverage=1 00:09:17.024 --rc genhtml_legend=1 00:09:17.024 --rc geninfo_all_blocks=1 00:09:17.024 --rc geninfo_unexecuted_blocks=1 00:09:17.024 00:09:17.024 ' 00:09:17.024 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:17.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:17.024 --rc genhtml_branch_coverage=1 00:09:17.024 --rc genhtml_function_coverage=1 00:09:17.024 --rc genhtml_legend=1 00:09:17.024 --rc geninfo_all_blocks=1 00:09:17.024 --rc geninfo_unexecuted_blocks=1 00:09:17.024 00:09:17.024 ' 00:09:17.024 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:17.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:17.024 --rc genhtml_branch_coverage=1 00:09:17.024 --rc genhtml_function_coverage=1 00:09:17.024 --rc genhtml_legend=1 00:09:17.024 --rc geninfo_all_blocks=1 00:09:17.024 --rc geninfo_unexecuted_blocks=1 00:09:17.024 00:09:17.024 ' 00:09:17.024 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:17.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:17.024 --rc genhtml_branch_coverage=1 00:09:17.024 --rc genhtml_function_coverage=1 00:09:17.024 --rc genhtml_legend=1 00:09:17.024 --rc geninfo_all_blocks=1 00:09:17.024 --rc geninfo_unexecuted_blocks=1 00:09:17.024 00:09:17.024 ' 00:09:17.024 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:17.024 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:09:17.024 09:37:04 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:17.024 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:17.025 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:17.025 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:17.025 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:17.025 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:17.025 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:17.025 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:17.025 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:17.025 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:17.025 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c 00:09:17.025 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=9203ba0c-8506-4f0b-a886-a7f874c4694c 00:09:17.025 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:17.025 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:17.025 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:17.025 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:17.025 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:17.025 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:09:17.025 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:17.025 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:17.025 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:17.025 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:17.025 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:17.025 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:17.025 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:09:17.025 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:17.025 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:09:17.025 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:17.025 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:17.025 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:17.025 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:17.025 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:17.025 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:17.025 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:17.025 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:17.025 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:17.025 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:17.025 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:17.025 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 
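Before the nvmf_lvs_grow trace continues, the nvmf_lvol run above is easier to follow stripped of xtrace noise. It reduces to roughly this rpc.py sequence (condensed from the trace; $rpc stands for /home/vagrant/spdk_repo/spdk/scripts/rpc.py and the shell variables are placeholders for the UUIDs each call prints):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # backing store: two 64 MiB / 512 B-block malloc bdevs striped into raid0, carrying an lvstore
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512
  $rpc bdev_malloc_create 64 512
  $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
  lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)
  lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)

  # export the lvol over NVMe/TCP from inside the target namespace
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420

  # while spdk_nvme_perf writes to the namespace (pid 62854 in the trace): snapshot,
  # grow the lvol from 20 to 30, clone the snapshot, and inflate the clone
  snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
  $rpc bdev_lvol_resize "$lvol" 30
  clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)
  $rpc bdev_lvol_inflate "$clone"

  # teardown once perf exits
  $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
  $rpc bdev_lvol_delete "$lvol"
  $rpc bdev_lvol_delete_lvstore -u "$lvs"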
00:09:17.025 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:09:17.025 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:17.025 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:17.025 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:17.025 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:17.025 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:17.025 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:17.025 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:17.025 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:17.025 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:09:17.025 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:09:17.025 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:09:17.025 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:09:17.025 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:09:17.025 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@460 -- # nvmf_veth_init 00:09:17.025 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:17.025 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:17.025 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:17.025 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:17.025 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:17.025 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:17.025 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:17.025 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:17.025 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:17.025 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:17.025 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:17.025 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:17.025 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:17.025 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:17.025 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 
00:09:17.025 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:17.025 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:17.025 Cannot find device "nvmf_init_br" 00:09:17.025 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:09:17.025 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:17.025 Cannot find device "nvmf_init_br2" 00:09:17.025 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:09:17.025 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:17.025 Cannot find device "nvmf_tgt_br" 00:09:17.025 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # true 00:09:17.026 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:17.026 Cannot find device "nvmf_tgt_br2" 00:09:17.026 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # true 00:09:17.026 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:17.026 Cannot find device "nvmf_init_br" 00:09:17.026 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # true 00:09:17.026 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:17.026 Cannot find device "nvmf_init_br2" 00:09:17.026 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # true 00:09:17.026 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:17.026 Cannot find device "nvmf_tgt_br" 00:09:17.026 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # true 00:09:17.026 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:17.026 Cannot find device "nvmf_tgt_br2" 00:09:17.026 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # true 00:09:17.026 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:17.026 Cannot find device "nvmf_br" 00:09:17.026 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # true 00:09:17.026 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:17.026 Cannot find device "nvmf_init_if" 00:09:17.026 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # true 00:09:17.026 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:17.026 Cannot find device "nvmf_init_if2" 00:09:17.026 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # true 00:09:17.026 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:17.026 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:17.026 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # true 00:09:17.026 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:17.026 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or 
directory 00:09:17.026 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # true 00:09:17.026 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:17.026 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:17.026 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:17.026 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:17.026 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:17.026 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:17.026 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:17.026 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:17.026 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:17.026 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:17.026 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:17.026 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:17.026 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:17.026 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:17.026 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:17.026 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:17.026 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:17.026 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:17.026 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:17.285 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:17.285 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:17.285 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:17.285 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:17.285 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:17.285 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:17.286 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 
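A small detail worth noting in both runs: the ipts and iptr helpers traced here tag every firewall rule they add with an SPDK_NVMF comment so that teardown can remove exactly those rules and nothing else. Roughly, condensed from the iptables invocations visible in the trace:

  # ipts: insert the requested rule, tagged with a comment that records it verbatim
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'

  # iptr (run from nvmftestfini): rewrite the ruleset without any SPDK_NVMF-tagged rules
  iptables-save | grep -v SPDK_NVMF | iptables-restore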
00:09:17.286 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:17.286 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:17.286 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:17.286 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:17.286 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:17.286 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:17.286 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:17.286 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:17.286 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.078 ms 00:09:17.286 00:09:17.286 --- 10.0.0.3 ping statistics --- 00:09:17.286 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:17.286 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:09:17.286 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:17.286 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:17.286 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.060 ms 00:09:17.286 00:09:17.286 --- 10.0.0.4 ping statistics --- 00:09:17.286 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:17.286 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:09:17.286 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:17.286 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:17.286 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:09:17.286 00:09:17.286 --- 10.0.0.1 ping statistics --- 00:09:17.286 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:17.286 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:09:17.286 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:17.286 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:17.286 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.050 ms 00:09:17.286 00:09:17.286 --- 10.0.0.2 ping statistics --- 00:09:17.286 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:17.286 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:09:17.286 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:17.286 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@461 -- # return 0 00:09:17.286 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:17.286 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:17.286 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:17.286 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:17.286 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:17.286 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:17.286 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:17.286 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:09:17.286 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:17.286 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:17.286 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:17.286 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=63238 00:09:17.286 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:17.286 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 63238 00:09:17.286 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 63238 ']' 00:09:17.286 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:17.286 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:17.286 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:17.286 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:17.286 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:17.286 09:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:17.286 [2024-11-19 09:37:04.841475] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 
00:09:17.286 [2024-11-19 09:37:04.841595] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:17.545 [2024-11-19 09:37:04.999013] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:17.545 [2024-11-19 09:37:05.085640] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:17.545 [2024-11-19 09:37:05.085727] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:17.545 [2024-11-19 09:37:05.085756] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:17.545 [2024-11-19 09:37:05.085770] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:17.545 [2024-11-19 09:37:05.085782] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:17.545 [2024-11-19 09:37:05.086275] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:17.545 [2024-11-19 09:37:05.147609] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:18.479 09:37:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:18.479 09:37:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:09:18.479 09:37:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:18.479 09:37:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:18.479 09:37:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:18.479 09:37:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:18.479 09:37:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:18.479 [2024-11-19 09:37:06.064591] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:18.479 09:37:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:09:18.479 09:37:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:18.479 09:37:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:18.479 09:37:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:18.479 ************************************ 00:09:18.479 START TEST lvs_grow_clean 00:09:18.479 ************************************ 00:09:18.479 09:37:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:09:18.479 09:37:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:09:18.479 09:37:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:18.479 09:37:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:18.479 09:37:06 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:18.479 09:37:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:18.479 09:37:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:18.479 09:37:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:18.479 09:37:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:18.737 09:37:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:18.996 09:37:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:18.996 09:37:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:19.255 09:37:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=3319fb36-b0d9-4556-9e51-1f69af14870d 00:09:19.255 09:37:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:19.255 09:37:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3319fb36-b0d9-4556-9e51-1f69af14870d 00:09:19.513 09:37:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:19.513 09:37:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:19.513 09:37:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 3319fb36-b0d9-4556-9e51-1f69af14870d lvol 150 00:09:19.772 09:37:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=54592cfe-0f88-4ac0-a597-b054807c3a40 00:09:19.772 09:37:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:19.772 09:37:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:20.030 [2024-11-19 09:37:07.557103] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:09:20.030 [2024-11-19 09:37:07.557190] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:20.030 true 00:09:20.030 09:37:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3319fb36-b0d9-4556-9e51-1f69af14870d 00:09:20.030 09:37:07 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:20.289 09:37:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:20.289 09:37:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:20.857 09:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 54592cfe-0f88-4ac0-a597-b054807c3a40 00:09:20.857 09:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:09:21.116 [2024-11-19 09:37:08.665730] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:21.116 09:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:09:21.375 09:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=63326 00:09:21.375 09:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:21.375 09:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:21.375 09:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 63326 /var/tmp/bdevperf.sock 00:09:21.375 09:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 63326 ']' 00:09:21.375 09:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:21.375 09:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:21.375 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:21.375 09:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:21.375 09:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:21.375 09:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:09:21.375 [2024-11-19 09:37:08.980964] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 
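With the lvstore at its initial 49 data clusters, the clean-variant run publishes lvol 54592cfe-0f88-4ac0-a597-b054807c3a40 over NVMe/TCP and points a separately started bdevperf instance at it. A condensed sketch of that export-and-attach step, reusing the RPCs and addresses from this run (the UUID and 10.0.0.3:4420 are specific to this trace), is:
LVOL=54592cfe-0f88-4ac0-a597-b054807c3a40
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# Publish the logical volume as a namespace of cnode0 and listen on 10.0.0.3:4420.
"$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
"$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$LVOL"
"$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
# Attach from the separately started bdevperf app, then kick off its queued workload.
"$RPC" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
bdevperf then drives the 10-second randwrite workload (-q 128 -o 4096) against Nvme0n1 while the lvstore is grown underneath it, as the per-second tables below show.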
00:09:21.375 [2024-11-19 09:37:08.981053] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63326 ] 00:09:21.633 [2024-11-19 09:37:09.125258] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:21.633 [2024-11-19 09:37:09.186834] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:21.633 [2024-11-19 09:37:09.241377] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:21.892 09:37:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:21.892 09:37:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:09:21.892 09:37:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:22.152 Nvme0n1 00:09:22.153 09:37:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:22.411 [ 00:09:22.411 { 00:09:22.411 "name": "Nvme0n1", 00:09:22.411 "aliases": [ 00:09:22.411 "54592cfe-0f88-4ac0-a597-b054807c3a40" 00:09:22.412 ], 00:09:22.412 "product_name": "NVMe disk", 00:09:22.412 "block_size": 4096, 00:09:22.412 "num_blocks": 38912, 00:09:22.412 "uuid": "54592cfe-0f88-4ac0-a597-b054807c3a40", 00:09:22.412 "numa_id": -1, 00:09:22.412 "assigned_rate_limits": { 00:09:22.412 "rw_ios_per_sec": 0, 00:09:22.412 "rw_mbytes_per_sec": 0, 00:09:22.412 "r_mbytes_per_sec": 0, 00:09:22.412 "w_mbytes_per_sec": 0 00:09:22.412 }, 00:09:22.412 "claimed": false, 00:09:22.412 "zoned": false, 00:09:22.412 "supported_io_types": { 00:09:22.412 "read": true, 00:09:22.412 "write": true, 00:09:22.412 "unmap": true, 00:09:22.412 "flush": true, 00:09:22.412 "reset": true, 00:09:22.412 "nvme_admin": true, 00:09:22.412 "nvme_io": true, 00:09:22.412 "nvme_io_md": false, 00:09:22.412 "write_zeroes": true, 00:09:22.412 "zcopy": false, 00:09:22.412 "get_zone_info": false, 00:09:22.412 "zone_management": false, 00:09:22.412 "zone_append": false, 00:09:22.412 "compare": true, 00:09:22.412 "compare_and_write": true, 00:09:22.412 "abort": true, 00:09:22.412 "seek_hole": false, 00:09:22.412 "seek_data": false, 00:09:22.412 "copy": true, 00:09:22.412 "nvme_iov_md": false 00:09:22.412 }, 00:09:22.412 "memory_domains": [ 00:09:22.412 { 00:09:22.412 "dma_device_id": "system", 00:09:22.412 "dma_device_type": 1 00:09:22.412 } 00:09:22.412 ], 00:09:22.412 "driver_specific": { 00:09:22.412 "nvme": [ 00:09:22.412 { 00:09:22.412 "trid": { 00:09:22.412 "trtype": "TCP", 00:09:22.412 "adrfam": "IPv4", 00:09:22.412 "traddr": "10.0.0.3", 00:09:22.412 "trsvcid": "4420", 00:09:22.412 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:22.412 }, 00:09:22.412 "ctrlr_data": { 00:09:22.412 "cntlid": 1, 00:09:22.412 "vendor_id": "0x8086", 00:09:22.412 "model_number": "SPDK bdev Controller", 00:09:22.412 "serial_number": "SPDK0", 00:09:22.412 "firmware_revision": "25.01", 00:09:22.412 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:22.412 "oacs": { 00:09:22.412 "security": 0, 00:09:22.412 "format": 0, 00:09:22.412 "firmware": 0, 
00:09:22.412 "ns_manage": 0 00:09:22.412 }, 00:09:22.412 "multi_ctrlr": true, 00:09:22.412 "ana_reporting": false 00:09:22.412 }, 00:09:22.412 "vs": { 00:09:22.412 "nvme_version": "1.3" 00:09:22.412 }, 00:09:22.412 "ns_data": { 00:09:22.412 "id": 1, 00:09:22.412 "can_share": true 00:09:22.412 } 00:09:22.412 } 00:09:22.412 ], 00:09:22.412 "mp_policy": "active_passive" 00:09:22.412 } 00:09:22.412 } 00:09:22.412 ] 00:09:22.412 09:37:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=63342 00:09:22.412 09:37:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:22.412 09:37:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:22.671 Running I/O for 10 seconds... 00:09:23.606 Latency(us) 00:09:23.606 [2024-11-19T09:37:11.229Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:23.606 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:23.606 Nvme0n1 : 1.00 7239.00 28.28 0.00 0.00 0.00 0.00 0.00 00:09:23.606 [2024-11-19T09:37:11.229Z] =================================================================================================================== 00:09:23.606 [2024-11-19T09:37:11.229Z] Total : 7239.00 28.28 0.00 0.00 0.00 0.00 0.00 00:09:23.606 00:09:24.541 09:37:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 3319fb36-b0d9-4556-9e51-1f69af14870d 00:09:24.541 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:24.541 Nvme0n1 : 2.00 7212.00 28.17 0.00 0.00 0.00 0.00 0.00 00:09:24.541 [2024-11-19T09:37:12.164Z] =================================================================================================================== 00:09:24.541 [2024-11-19T09:37:12.164Z] Total : 7212.00 28.17 0.00 0.00 0.00 0.00 0.00 00:09:24.541 00:09:24.801 true 00:09:24.801 09:37:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3319fb36-b0d9-4556-9e51-1f69af14870d 00:09:24.801 09:37:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:09:25.086 09:37:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:25.086 09:37:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:25.086 09:37:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 63342 00:09:25.655 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:25.655 Nvme0n1 : 3.00 7178.67 28.04 0.00 0.00 0.00 0.00 0.00 00:09:25.655 [2024-11-19T09:37:13.278Z] =================================================================================================================== 00:09:25.655 [2024-11-19T09:37:13.278Z] Total : 7178.67 28.04 0.00 0.00 0.00 0.00 0.00 00:09:25.655 00:09:26.591 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:26.591 Nvme0n1 : 4.00 7130.00 27.85 0.00 0.00 0.00 0.00 0.00 00:09:26.591 [2024-11-19T09:37:14.214Z] 
=================================================================================================================== 00:09:26.591 [2024-11-19T09:37:14.214Z] Total : 7130.00 27.85 0.00 0.00 0.00 0.00 0.00 00:09:26.591 00:09:27.528 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:27.528 Nvme0n1 : 5.00 7101.00 27.74 0.00 0.00 0.00 0.00 0.00 00:09:27.528 [2024-11-19T09:37:15.151Z] =================================================================================================================== 00:09:27.528 [2024-11-19T09:37:15.151Z] Total : 7101.00 27.74 0.00 0.00 0.00 0.00 0.00 00:09:27.528 00:09:28.904 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:28.904 Nvme0n1 : 6.00 7060.50 27.58 0.00 0.00 0.00 0.00 0.00 00:09:28.904 [2024-11-19T09:37:16.527Z] =================================================================================================================== 00:09:28.904 [2024-11-19T09:37:16.527Z] Total : 7060.50 27.58 0.00 0.00 0.00 0.00 0.00 00:09:28.904 00:09:29.909 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:29.909 Nvme0n1 : 7.00 7049.71 27.54 0.00 0.00 0.00 0.00 0.00 00:09:29.909 [2024-11-19T09:37:17.532Z] =================================================================================================================== 00:09:29.909 [2024-11-19T09:37:17.532Z] Total : 7049.71 27.54 0.00 0.00 0.00 0.00 0.00 00:09:29.909 00:09:30.491 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:30.491 Nvme0n1 : 8.00 7025.75 27.44 0.00 0.00 0.00 0.00 0.00 00:09:30.491 [2024-11-19T09:37:18.114Z] =================================================================================================================== 00:09:30.491 [2024-11-19T09:37:18.114Z] Total : 7025.75 27.44 0.00 0.00 0.00 0.00 0.00 00:09:30.491 00:09:31.867 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:31.867 Nvme0n1 : 9.00 7021.22 27.43 0.00 0.00 0.00 0.00 0.00 00:09:31.867 [2024-11-19T09:37:19.490Z] =================================================================================================================== 00:09:31.867 [2024-11-19T09:37:19.490Z] Total : 7021.22 27.43 0.00 0.00 0.00 0.00 0.00 00:09:31.867 00:09:32.803 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:32.803 Nvme0n1 : 10.00 7004.90 27.36 0.00 0.00 0.00 0.00 0.00 00:09:32.803 [2024-11-19T09:37:20.426Z] =================================================================================================================== 00:09:32.803 [2024-11-19T09:37:20.426Z] Total : 7004.90 27.36 0.00 0.00 0.00 0.00 0.00 00:09:32.803 00:09:32.803 00:09:32.803 Latency(us) 00:09:32.803 [2024-11-19T09:37:20.426Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:32.803 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:32.803 Nvme0n1 : 10.01 7009.04 27.38 0.00 0.00 18255.39 8579.26 57195.05 00:09:32.803 [2024-11-19T09:37:20.426Z] =================================================================================================================== 00:09:32.803 [2024-11-19T09:37:20.426Z] Total : 7009.04 27.38 0.00 0.00 18255.39 8579.26 57195.05 00:09:32.803 { 00:09:32.803 "results": [ 00:09:32.803 { 00:09:32.803 "job": "Nvme0n1", 00:09:32.803 "core_mask": "0x2", 00:09:32.803 "workload": "randwrite", 00:09:32.803 "status": "finished", 00:09:32.803 "queue_depth": 128, 00:09:32.803 "io_size": 4096, 00:09:32.803 "runtime": 
10.012354, 00:09:32.803 "iops": 7009.04103071066, 00:09:32.803 "mibps": 27.379066526213517, 00:09:32.803 "io_failed": 0, 00:09:32.803 "io_timeout": 0, 00:09:32.803 "avg_latency_us": 18255.385439259433, 00:09:32.803 "min_latency_us": 8579.258181818182, 00:09:32.803 "max_latency_us": 57195.05454545454 00:09:32.803 } 00:09:32.803 ], 00:09:32.803 "core_count": 1 00:09:32.803 } 00:09:32.803 09:37:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 63326 00:09:32.803 09:37:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 63326 ']' 00:09:32.803 09:37:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 63326 00:09:32.803 09:37:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:09:32.803 09:37:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:32.803 09:37:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63326 00:09:32.803 killing process with pid 63326 00:09:32.803 Received shutdown signal, test time was about 10.000000 seconds 00:09:32.803 00:09:32.803 Latency(us) 00:09:32.803 [2024-11-19T09:37:20.426Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:32.803 [2024-11-19T09:37:20.426Z] =================================================================================================================== 00:09:32.803 [2024-11-19T09:37:20.426Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:32.803 09:37:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:32.803 09:37:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:32.803 09:37:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63326' 00:09:32.803 09:37:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 63326 00:09:32.803 09:37:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 63326 00:09:32.803 09:37:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:09:33.062 09:37:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:33.321 09:37:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3319fb36-b0d9-4556-9e51-1f69af14870d 00:09:33.321 09:37:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:33.887 09:37:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:33.887 09:37:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:09:33.887 09:37:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:34.146 [2024-11-19 09:37:21.541830] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:34.146 09:37:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3319fb36-b0d9-4556-9e51-1f69af14870d 00:09:34.146 09:37:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:09:34.146 09:37:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3319fb36-b0d9-4556-9e51-1f69af14870d 00:09:34.146 09:37:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:34.146 09:37:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:34.146 09:37:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:34.146 09:37:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:34.146 09:37:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:34.146 09:37:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:34.146 09:37:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:34.146 09:37:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:09:34.146 09:37:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3319fb36-b0d9-4556-9e51-1f69af14870d 00:09:34.406 request: 00:09:34.406 { 00:09:34.406 "uuid": "3319fb36-b0d9-4556-9e51-1f69af14870d", 00:09:34.406 "method": "bdev_lvol_get_lvstores", 00:09:34.406 "req_id": 1 00:09:34.406 } 00:09:34.406 Got JSON-RPC error response 00:09:34.406 response: 00:09:34.406 { 00:09:34.406 "code": -19, 00:09:34.406 "message": "No such device" 00:09:34.406 } 00:09:34.406 09:37:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:09:34.406 09:37:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:34.406 09:37:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:34.406 09:37:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:34.406 09:37:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:34.664 aio_bdev 00:09:34.664 09:37:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 
54592cfe-0f88-4ac0-a597-b054807c3a40 00:09:34.664 09:37:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=54592cfe-0f88-4ac0-a597-b054807c3a40 00:09:34.664 09:37:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:34.664 09:37:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:09:34.664 09:37:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:34.664 09:37:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:34.664 09:37:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:34.923 09:37:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 54592cfe-0f88-4ac0-a597-b054807c3a40 -t 2000 00:09:35.182 [ 00:09:35.182 { 00:09:35.182 "name": "54592cfe-0f88-4ac0-a597-b054807c3a40", 00:09:35.182 "aliases": [ 00:09:35.182 "lvs/lvol" 00:09:35.182 ], 00:09:35.182 "product_name": "Logical Volume", 00:09:35.182 "block_size": 4096, 00:09:35.182 "num_blocks": 38912, 00:09:35.182 "uuid": "54592cfe-0f88-4ac0-a597-b054807c3a40", 00:09:35.182 "assigned_rate_limits": { 00:09:35.182 "rw_ios_per_sec": 0, 00:09:35.182 "rw_mbytes_per_sec": 0, 00:09:35.182 "r_mbytes_per_sec": 0, 00:09:35.182 "w_mbytes_per_sec": 0 00:09:35.182 }, 00:09:35.182 "claimed": false, 00:09:35.182 "zoned": false, 00:09:35.182 "supported_io_types": { 00:09:35.182 "read": true, 00:09:35.182 "write": true, 00:09:35.182 "unmap": true, 00:09:35.182 "flush": false, 00:09:35.182 "reset": true, 00:09:35.182 "nvme_admin": false, 00:09:35.182 "nvme_io": false, 00:09:35.182 "nvme_io_md": false, 00:09:35.182 "write_zeroes": true, 00:09:35.182 "zcopy": false, 00:09:35.182 "get_zone_info": false, 00:09:35.182 "zone_management": false, 00:09:35.182 "zone_append": false, 00:09:35.182 "compare": false, 00:09:35.182 "compare_and_write": false, 00:09:35.182 "abort": false, 00:09:35.182 "seek_hole": true, 00:09:35.182 "seek_data": true, 00:09:35.182 "copy": false, 00:09:35.182 "nvme_iov_md": false 00:09:35.182 }, 00:09:35.182 "driver_specific": { 00:09:35.182 "lvol": { 00:09:35.182 "lvol_store_uuid": "3319fb36-b0d9-4556-9e51-1f69af14870d", 00:09:35.182 "base_bdev": "aio_bdev", 00:09:35.183 "thin_provision": false, 00:09:35.183 "num_allocated_clusters": 38, 00:09:35.183 "snapshot": false, 00:09:35.183 "clone": false, 00:09:35.183 "esnap_clone": false 00:09:35.183 } 00:09:35.183 } 00:09:35.183 } 00:09:35.183 ] 00:09:35.183 09:37:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:09:35.183 09:37:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:35.183 09:37:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3319fb36-b0d9-4556-9e51-1f69af14870d 00:09:35.441 09:37:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:35.441 09:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3319fb36-b0d9-4556-9e51-1f69af14870d 00:09:35.441 09:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:35.700 09:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:35.700 09:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 54592cfe-0f88-4ac0-a597-b054807c3a40 00:09:35.968 09:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 3319fb36-b0d9-4556-9e51-1f69af14870d 00:09:36.535 09:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:36.795 09:37:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:37.054 ************************************ 00:09:37.054 END TEST lvs_grow_clean 00:09:37.054 ************************************ 00:09:37.054 00:09:37.054 real 0m18.521s 00:09:37.054 user 0m17.356s 00:09:37.054 sys 0m2.568s 00:09:37.054 09:37:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:37.054 09:37:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:09:37.054 09:37:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:09:37.054 09:37:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:37.054 09:37:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:37.054 09:37:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:37.054 ************************************ 00:09:37.054 START TEST lvs_grow_dirty 00:09:37.054 ************************************ 00:09:37.054 09:37:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:09:37.054 09:37:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:09:37.054 09:37:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:37.054 09:37:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:37.054 09:37:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:37.054 09:37:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:37.054 09:37:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:37.054 09:37:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:37.054 09:37:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:37.312 09:37:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:37.571 09:37:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:37.571 09:37:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:37.830 09:37:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=fb7c0147-485b-4d23-a1f2-ee9b8936ffa9 00:09:37.830 09:37:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fb7c0147-485b-4d23-a1f2-ee9b8936ffa9 00:09:37.830 09:37:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:38.089 09:37:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:38.089 09:37:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:38.089 09:37:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u fb7c0147-485b-4d23-a1f2-ee9b8936ffa9 lvol 150 00:09:38.348 09:37:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=69ac0e78-9be0-4a0a-a207-5577b29d8b0a 00:09:38.348 09:37:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:38.348 09:37:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:38.607 [2024-11-19 09:37:26.159134] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:09:38.607 [2024-11-19 09:37:26.159241] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:38.607 true 00:09:38.607 09:37:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fb7c0147-485b-4d23-a1f2-ee9b8936ffa9 00:09:38.607 09:37:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:39.174 09:37:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:39.174 09:37:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:39.432 09:37:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 69ac0e78-9be0-4a0a-a207-5577b29d8b0a 00:09:39.693 09:37:27 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:09:39.951 [2024-11-19 09:37:27.399798] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:39.951 09:37:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:09:40.210 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:40.210 09:37:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=63608 00:09:40.210 09:37:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:40.210 09:37:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:40.210 09:37:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 63608 /var/tmp/bdevperf.sock 00:09:40.210 09:37:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 63608 ']' 00:09:40.210 09:37:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:40.210 09:37:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:40.210 09:37:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:40.210 09:37:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:40.210 09:37:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:40.210 [2024-11-19 09:37:27.789016] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 
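The dirty-variant run that follows repeats the clean-variant pattern: the backing AIO file is extended from 200M to 400M and rescanned, and once bdevperf is writing, the lvstore is grown from 49 to 99 data clusters and the count is re-checked. A condensed sketch of that resize path is below (the trace interleaves these steps with the bdevperf workload; the UUID and path are from this run):
LVS=fb7c0147-485b-4d23-a1f2-ee9b8936ffa9
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
AIO_FILE=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev
# Double the backing file, let the AIO bdev pick up the new size, then grow the lvstore into it.
truncate -s 400M "$AIO_FILE"
"$RPC" bdev_aio_rescan aio_bdev
"$RPC" bdev_lvol_grow_lvstore -u "$LVS"
# With 4 MiB clusters on the 400M file, the lvstore should now report 99 data clusters.
clusters=$("$RPC" bdev_lvol_get_lvstores -u "$LVS" | jq -r '.[0].total_data_clusters')
(( clusters == 99 )) || echo "unexpected cluster count: $clusters"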
00:09:40.210 [2024-11-19 09:37:27.789335] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63608 ] 00:09:40.469 [2024-11-19 09:37:27.940026] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:40.469 [2024-11-19 09:37:28.009559] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:40.469 [2024-11-19 09:37:28.067029] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:41.404 09:37:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:41.404 09:37:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:09:41.404 09:37:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:41.662 Nvme0n1 00:09:41.662 09:37:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:41.921 [ 00:09:41.921 { 00:09:41.921 "name": "Nvme0n1", 00:09:41.921 "aliases": [ 00:09:41.921 "69ac0e78-9be0-4a0a-a207-5577b29d8b0a" 00:09:41.921 ], 00:09:41.921 "product_name": "NVMe disk", 00:09:41.921 "block_size": 4096, 00:09:41.921 "num_blocks": 38912, 00:09:41.921 "uuid": "69ac0e78-9be0-4a0a-a207-5577b29d8b0a", 00:09:41.921 "numa_id": -1, 00:09:41.921 "assigned_rate_limits": { 00:09:41.921 "rw_ios_per_sec": 0, 00:09:41.921 "rw_mbytes_per_sec": 0, 00:09:41.921 "r_mbytes_per_sec": 0, 00:09:41.921 "w_mbytes_per_sec": 0 00:09:41.921 }, 00:09:41.921 "claimed": false, 00:09:41.921 "zoned": false, 00:09:41.921 "supported_io_types": { 00:09:41.921 "read": true, 00:09:41.921 "write": true, 00:09:41.921 "unmap": true, 00:09:41.921 "flush": true, 00:09:41.921 "reset": true, 00:09:41.921 "nvme_admin": true, 00:09:41.921 "nvme_io": true, 00:09:41.921 "nvme_io_md": false, 00:09:41.921 "write_zeroes": true, 00:09:41.921 "zcopy": false, 00:09:41.921 "get_zone_info": false, 00:09:41.921 "zone_management": false, 00:09:41.921 "zone_append": false, 00:09:41.921 "compare": true, 00:09:41.921 "compare_and_write": true, 00:09:41.921 "abort": true, 00:09:41.921 "seek_hole": false, 00:09:41.921 "seek_data": false, 00:09:41.921 "copy": true, 00:09:41.921 "nvme_iov_md": false 00:09:41.921 }, 00:09:41.921 "memory_domains": [ 00:09:41.921 { 00:09:41.921 "dma_device_id": "system", 00:09:41.922 "dma_device_type": 1 00:09:41.922 } 00:09:41.922 ], 00:09:41.922 "driver_specific": { 00:09:41.922 "nvme": [ 00:09:41.922 { 00:09:41.922 "trid": { 00:09:41.922 "trtype": "TCP", 00:09:41.922 "adrfam": "IPv4", 00:09:41.922 "traddr": "10.0.0.3", 00:09:41.922 "trsvcid": "4420", 00:09:41.922 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:41.922 }, 00:09:41.922 "ctrlr_data": { 00:09:41.922 "cntlid": 1, 00:09:41.922 "vendor_id": "0x8086", 00:09:41.922 "model_number": "SPDK bdev Controller", 00:09:41.922 "serial_number": "SPDK0", 00:09:41.922 "firmware_revision": "25.01", 00:09:41.922 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:41.922 "oacs": { 00:09:41.922 "security": 0, 00:09:41.922 "format": 0, 00:09:41.922 "firmware": 0, 
00:09:41.922 "ns_manage": 0 00:09:41.922 }, 00:09:41.922 "multi_ctrlr": true, 00:09:41.922 "ana_reporting": false 00:09:41.922 }, 00:09:41.922 "vs": { 00:09:41.922 "nvme_version": "1.3" 00:09:41.922 }, 00:09:41.922 "ns_data": { 00:09:41.922 "id": 1, 00:09:41.922 "can_share": true 00:09:41.922 } 00:09:41.922 } 00:09:41.922 ], 00:09:41.922 "mp_policy": "active_passive" 00:09:41.922 } 00:09:41.922 } 00:09:41.922 ] 00:09:41.922 09:37:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=63626 00:09:41.922 09:37:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:41.922 09:37:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:42.180 Running I/O for 10 seconds... 00:09:43.116 Latency(us) 00:09:43.116 [2024-11-19T09:37:30.739Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:43.116 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:43.116 Nvme0n1 : 1.00 6604.00 25.80 0.00 0.00 0.00 0.00 0.00 00:09:43.116 [2024-11-19T09:37:30.739Z] =================================================================================================================== 00:09:43.116 [2024-11-19T09:37:30.739Z] Total : 6604.00 25.80 0.00 0.00 0.00 0.00 0.00 00:09:43.116 00:09:44.050 09:37:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u fb7c0147-485b-4d23-a1f2-ee9b8936ffa9 00:09:44.050 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:44.050 Nvme0n1 : 2.00 6667.50 26.04 0.00 0.00 0.00 0.00 0.00 00:09:44.050 [2024-11-19T09:37:31.673Z] =================================================================================================================== 00:09:44.050 [2024-11-19T09:37:31.673Z] Total : 6667.50 26.04 0.00 0.00 0.00 0.00 0.00 00:09:44.050 00:09:44.309 true 00:09:44.309 09:37:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fb7c0147-485b-4d23-a1f2-ee9b8936ffa9 00:09:44.309 09:37:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:09:44.875 09:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:44.875 09:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:44.875 09:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 63626 00:09:45.133 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:45.133 Nvme0n1 : 3.00 6731.00 26.29 0.00 0.00 0.00 0.00 0.00 00:09:45.133 [2024-11-19T09:37:32.756Z] =================================================================================================================== 00:09:45.133 [2024-11-19T09:37:32.756Z] Total : 6731.00 26.29 0.00 0.00 0.00 0.00 0.00 00:09:45.133 00:09:46.069 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:46.069 Nvme0n1 : 4.00 6731.00 26.29 0.00 0.00 0.00 0.00 0.00 00:09:46.069 [2024-11-19T09:37:33.692Z] 
=================================================================================================================== 00:09:46.069 [2024-11-19T09:37:33.692Z] Total : 6731.00 26.29 0.00 0.00 0.00 0.00 0.00 00:09:46.069 00:09:47.005 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:47.005 Nvme0n1 : 5.00 6731.00 26.29 0.00 0.00 0.00 0.00 0.00 00:09:47.005 [2024-11-19T09:37:34.628Z] =================================================================================================================== 00:09:47.005 [2024-11-19T09:37:34.628Z] Total : 6731.00 26.29 0.00 0.00 0.00 0.00 0.00 00:09:47.005 00:09:48.379 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:48.379 Nvme0n1 : 6.00 6455.83 25.22 0.00 0.00 0.00 0.00 0.00 00:09:48.379 [2024-11-19T09:37:36.002Z] =================================================================================================================== 00:09:48.379 [2024-11-19T09:37:36.002Z] Total : 6455.83 25.22 0.00 0.00 0.00 0.00 0.00 00:09:48.379 00:09:49.371 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:49.371 Nvme0n1 : 7.00 6477.00 25.30 0.00 0.00 0.00 0.00 0.00 00:09:49.371 [2024-11-19T09:37:36.994Z] =================================================================================================================== 00:09:49.371 [2024-11-19T09:37:36.994Z] Total : 6477.00 25.30 0.00 0.00 0.00 0.00 0.00 00:09:49.371 00:09:50.308 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:50.308 Nvme0n1 : 8.00 6477.00 25.30 0.00 0.00 0.00 0.00 0.00 00:09:50.308 [2024-11-19T09:37:37.931Z] =================================================================================================================== 00:09:50.308 [2024-11-19T09:37:37.931Z] Total : 6477.00 25.30 0.00 0.00 0.00 0.00 0.00 00:09:50.308 00:09:51.248 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:51.248 Nvme0n1 : 9.00 6448.78 25.19 0.00 0.00 0.00 0.00 0.00 00:09:51.248 [2024-11-19T09:37:38.871Z] =================================================================================================================== 00:09:51.248 [2024-11-19T09:37:38.871Z] Total : 6448.78 25.19 0.00 0.00 0.00 0.00 0.00 00:09:51.248 00:09:52.183 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:52.183 Nvme0n1 : 10.00 6413.50 25.05 0.00 0.00 0.00 0.00 0.00 00:09:52.183 [2024-11-19T09:37:39.806Z] =================================================================================================================== 00:09:52.183 [2024-11-19T09:37:39.806Z] Total : 6413.50 25.05 0.00 0.00 0.00 0.00 0.00 00:09:52.183 00:09:52.183 00:09:52.183 Latency(us) 00:09:52.183 [2024-11-19T09:37:39.806Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:52.183 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:52.183 Nvme0n1 : 10.01 6417.18 25.07 0.00 0.00 19940.70 15073.28 265003.75 00:09:52.183 [2024-11-19T09:37:39.806Z] =================================================================================================================== 00:09:52.183 [2024-11-19T09:37:39.806Z] Total : 6417.18 25.07 0.00 0.00 19940.70 15073.28 265003.75 00:09:52.183 { 00:09:52.183 "results": [ 00:09:52.183 { 00:09:52.183 "job": "Nvme0n1", 00:09:52.183 "core_mask": "0x2", 00:09:52.183 "workload": "randwrite", 00:09:52.183 "status": "finished", 00:09:52.183 "queue_depth": 128, 00:09:52.183 "io_size": 4096, 00:09:52.183 "runtime": 
10.014218, 00:09:52.183 "iops": 6417.176059079201, 00:09:52.183 "mibps": 25.06709398077813, 00:09:52.183 "io_failed": 0, 00:09:52.183 "io_timeout": 0, 00:09:52.183 "avg_latency_us": 19940.7005473247, 00:09:52.183 "min_latency_us": 15073.28, 00:09:52.183 "max_latency_us": 265003.75272727275 00:09:52.183 } 00:09:52.183 ], 00:09:52.183 "core_count": 1 00:09:52.183 } 00:09:52.183 09:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 63608 00:09:52.183 09:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 63608 ']' 00:09:52.183 09:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 63608 00:09:52.183 09:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:09:52.183 09:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:52.183 09:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63608 00:09:52.183 killing process with pid 63608 00:09:52.183 Received shutdown signal, test time was about 10.000000 seconds 00:09:52.183 00:09:52.183 Latency(us) 00:09:52.183 [2024-11-19T09:37:39.806Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:52.183 [2024-11-19T09:37:39.806Z] =================================================================================================================== 00:09:52.183 [2024-11-19T09:37:39.806Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:52.183 09:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:52.183 09:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:52.183 09:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63608' 00:09:52.183 09:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 63608 00:09:52.183 09:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 63608 00:09:52.482 09:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:09:52.741 09:37:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:52.999 09:37:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:52.999 09:37:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fb7c0147-485b-4d23-a1f2-ee9b8936ffa9 00:09:53.257 09:37:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:53.257 09:37:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:09:53.257 09:37:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 63238 00:09:53.257 09:37:40 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 63238 00:09:53.258 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 63238 Killed "${NVMF_APP[@]}" "$@" 00:09:53.258 09:37:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:09:53.258 09:37:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:09:53.258 09:37:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:53.258 09:37:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:53.258 09:37:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:53.258 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:53.258 09:37:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=63764 00:09:53.258 09:37:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 63764 00:09:53.258 09:37:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 63764 ']' 00:09:53.258 09:37:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:53.258 09:37:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:53.258 09:37:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:53.258 09:37:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:53.258 09:37:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:53.258 09:37:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:53.516 [2024-11-19 09:37:40.938512] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:09:53.516 [2024-11-19 09:37:40.938954] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:53.516 [2024-11-19 09:37:41.093232] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:53.773 [2024-11-19 09:37:41.162620] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:53.773 [2024-11-19 09:37:41.162677] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:53.773 [2024-11-19 09:37:41.162705] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:53.773 [2024-11-19 09:37:41.162712] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:53.773 [2024-11-19 09:37:41.162719] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
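The kill -9 above is the "dirty" part of the test: nvmf_tgt is killed while the grown lvstore is still open, then restarted as pid 63764. When the AIO bdev is re-created on the 400M file, the blobstore runs recovery (the "Performing recovery on blobstore" notices below) and the lvstore must still report 99 total and 61 free clusters. A condensed sketch of that recover-and-verify step, using the RPCs from this trace (UUID and path are from this run), is:
LVS=fb7c0147-485b-4d23-a1f2-ee9b8936ffa9
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
AIO_FILE=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev
# Re-register the AIO bdev on the already-grown file; examining it replays the blobstore.
"$RPC" bdev_aio_create "$AIO_FILE" aio_bdev 4096
"$RPC" bdev_wait_for_examine
# The grown geometry must have survived the unclean shutdown.
total=$("$RPC" bdev_lvol_get_lvstores -u "$LVS" | jq -r '.[0].total_data_clusters')
free=$("$RPC" bdev_lvol_get_lvstores -u "$LVS" | jq -r '.[0].free_clusters')
(( total == 99 && free == 61 )) || echo "lvstore geometry lost after dirty restart"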
00:09:53.773 [2024-11-19 09:37:41.163142] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:53.773 [2024-11-19 09:37:41.222849] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:54.341 09:37:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:54.341 09:37:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:09:54.341 09:37:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:54.341 09:37:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:54.341 09:37:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:54.599 09:37:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:54.599 09:37:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:54.858 [2024-11-19 09:37:42.251877] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:09:54.858 [2024-11-19 09:37:42.252132] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:09:54.858 [2024-11-19 09:37:42.252381] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:09:54.858 09:37:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:09:54.858 09:37:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 69ac0e78-9be0-4a0a-a207-5577b29d8b0a 00:09:54.858 09:37:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=69ac0e78-9be0-4a0a-a207-5577b29d8b0a 00:09:54.858 09:37:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:54.858 09:37:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:09:54.858 09:37:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:54.858 09:37:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:54.858 09:37:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:55.117 09:37:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 69ac0e78-9be0-4a0a-a207-5577b29d8b0a -t 2000 00:09:55.375 [ 00:09:55.375 { 00:09:55.375 "name": "69ac0e78-9be0-4a0a-a207-5577b29d8b0a", 00:09:55.375 "aliases": [ 00:09:55.375 "lvs/lvol" 00:09:55.375 ], 00:09:55.375 "product_name": "Logical Volume", 00:09:55.375 "block_size": 4096, 00:09:55.375 "num_blocks": 38912, 00:09:55.375 "uuid": "69ac0e78-9be0-4a0a-a207-5577b29d8b0a", 00:09:55.375 "assigned_rate_limits": { 00:09:55.375 "rw_ios_per_sec": 0, 00:09:55.375 "rw_mbytes_per_sec": 0, 00:09:55.375 "r_mbytes_per_sec": 0, 00:09:55.375 "w_mbytes_per_sec": 0 00:09:55.375 }, 00:09:55.375 
"claimed": false, 00:09:55.375 "zoned": false, 00:09:55.375 "supported_io_types": { 00:09:55.375 "read": true, 00:09:55.375 "write": true, 00:09:55.375 "unmap": true, 00:09:55.375 "flush": false, 00:09:55.375 "reset": true, 00:09:55.375 "nvme_admin": false, 00:09:55.375 "nvme_io": false, 00:09:55.375 "nvme_io_md": false, 00:09:55.376 "write_zeroes": true, 00:09:55.376 "zcopy": false, 00:09:55.376 "get_zone_info": false, 00:09:55.376 "zone_management": false, 00:09:55.376 "zone_append": false, 00:09:55.376 "compare": false, 00:09:55.376 "compare_and_write": false, 00:09:55.376 "abort": false, 00:09:55.376 "seek_hole": true, 00:09:55.376 "seek_data": true, 00:09:55.376 "copy": false, 00:09:55.376 "nvme_iov_md": false 00:09:55.376 }, 00:09:55.376 "driver_specific": { 00:09:55.376 "lvol": { 00:09:55.376 "lvol_store_uuid": "fb7c0147-485b-4d23-a1f2-ee9b8936ffa9", 00:09:55.376 "base_bdev": "aio_bdev", 00:09:55.376 "thin_provision": false, 00:09:55.376 "num_allocated_clusters": 38, 00:09:55.376 "snapshot": false, 00:09:55.376 "clone": false, 00:09:55.376 "esnap_clone": false 00:09:55.376 } 00:09:55.376 } 00:09:55.376 } 00:09:55.376 ] 00:09:55.376 09:37:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:09:55.376 09:37:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fb7c0147-485b-4d23-a1f2-ee9b8936ffa9 00:09:55.376 09:37:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:09:55.635 09:37:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:09:55.635 09:37:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fb7c0147-485b-4d23-a1f2-ee9b8936ffa9 00:09:55.635 09:37:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:09:55.893 09:37:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:09:55.893 09:37:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:56.467 [2024-11-19 09:37:43.809293] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:56.467 09:37:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fb7c0147-485b-4d23-a1f2-ee9b8936ffa9 00:09:56.467 09:37:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:09:56.467 09:37:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fb7c0147-485b-4d23-a1f2-ee9b8936ffa9 00:09:56.467 09:37:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:56.468 09:37:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:56.468 09:37:43 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:56.468 09:37:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:56.468 09:37:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:56.468 09:37:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:56.468 09:37:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:56.468 09:37:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:09:56.468 09:37:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fb7c0147-485b-4d23-a1f2-ee9b8936ffa9 00:09:56.726 request: 00:09:56.727 { 00:09:56.727 "uuid": "fb7c0147-485b-4d23-a1f2-ee9b8936ffa9", 00:09:56.727 "method": "bdev_lvol_get_lvstores", 00:09:56.727 "req_id": 1 00:09:56.727 } 00:09:56.727 Got JSON-RPC error response 00:09:56.727 response: 00:09:56.727 { 00:09:56.727 "code": -19, 00:09:56.727 "message": "No such device" 00:09:56.727 } 00:09:56.727 09:37:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:09:56.727 09:37:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:56.727 09:37:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:56.727 09:37:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:56.727 09:37:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:56.986 aio_bdev 00:09:56.986 09:37:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 69ac0e78-9be0-4a0a-a207-5577b29d8b0a 00:09:56.986 09:37:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=69ac0e78-9be0-4a0a-a207-5577b29d8b0a 00:09:56.986 09:37:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:56.986 09:37:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:09:56.986 09:37:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:56.986 09:37:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:56.986 09:37:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:57.244 09:37:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 69ac0e78-9be0-4a0a-a207-5577b29d8b0a -t 2000 00:09:57.502 [ 00:09:57.502 { 
00:09:57.502 "name": "69ac0e78-9be0-4a0a-a207-5577b29d8b0a", 00:09:57.502 "aliases": [ 00:09:57.502 "lvs/lvol" 00:09:57.502 ], 00:09:57.502 "product_name": "Logical Volume", 00:09:57.502 "block_size": 4096, 00:09:57.502 "num_blocks": 38912, 00:09:57.502 "uuid": "69ac0e78-9be0-4a0a-a207-5577b29d8b0a", 00:09:57.502 "assigned_rate_limits": { 00:09:57.502 "rw_ios_per_sec": 0, 00:09:57.502 "rw_mbytes_per_sec": 0, 00:09:57.502 "r_mbytes_per_sec": 0, 00:09:57.502 "w_mbytes_per_sec": 0 00:09:57.502 }, 00:09:57.502 "claimed": false, 00:09:57.502 "zoned": false, 00:09:57.502 "supported_io_types": { 00:09:57.502 "read": true, 00:09:57.502 "write": true, 00:09:57.502 "unmap": true, 00:09:57.502 "flush": false, 00:09:57.502 "reset": true, 00:09:57.502 "nvme_admin": false, 00:09:57.502 "nvme_io": false, 00:09:57.502 "nvme_io_md": false, 00:09:57.502 "write_zeroes": true, 00:09:57.502 "zcopy": false, 00:09:57.502 "get_zone_info": false, 00:09:57.502 "zone_management": false, 00:09:57.502 "zone_append": false, 00:09:57.502 "compare": false, 00:09:57.502 "compare_and_write": false, 00:09:57.502 "abort": false, 00:09:57.502 "seek_hole": true, 00:09:57.502 "seek_data": true, 00:09:57.502 "copy": false, 00:09:57.502 "nvme_iov_md": false 00:09:57.502 }, 00:09:57.502 "driver_specific": { 00:09:57.502 "lvol": { 00:09:57.502 "lvol_store_uuid": "fb7c0147-485b-4d23-a1f2-ee9b8936ffa9", 00:09:57.502 "base_bdev": "aio_bdev", 00:09:57.502 "thin_provision": false, 00:09:57.502 "num_allocated_clusters": 38, 00:09:57.502 "snapshot": false, 00:09:57.502 "clone": false, 00:09:57.502 "esnap_clone": false 00:09:57.502 } 00:09:57.502 } 00:09:57.502 } 00:09:57.502 ] 00:09:57.502 09:37:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:09:57.502 09:37:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:57.502 09:37:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fb7c0147-485b-4d23-a1f2-ee9b8936ffa9 00:09:57.761 09:37:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:57.761 09:37:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fb7c0147-485b-4d23-a1f2-ee9b8936ffa9 00:09:57.761 09:37:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:58.019 09:37:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:58.019 09:37:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 69ac0e78-9be0-4a0a-a207-5577b29d8b0a 00:09:58.277 09:37:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u fb7c0147-485b-4d23-a1f2-ee9b8936ffa9 00:09:58.845 09:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:59.103 09:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:59.361 ************************************ 00:09:59.361 END TEST lvs_grow_dirty 00:09:59.361 ************************************ 00:09:59.361 00:09:59.361 real 0m22.181s 00:09:59.361 user 0m45.776s 00:09:59.361 sys 0m8.096s 00:09:59.361 09:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:59.361 09:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:59.361 09:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:09:59.361 09:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:09:59.361 09:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:09:59.361 09:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:09:59.361 09:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:09:59.361 09:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:09:59.361 09:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:09:59.361 09:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:09:59.361 09:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:09:59.361 nvmf_trace.0 00:09:59.361 09:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:09:59.361 09:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:09:59.361 09:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:59.361 09:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:09:59.619 09:37:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:59.619 09:37:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:09:59.619 09:37:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:59.619 09:37:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:59.619 rmmod nvme_tcp 00:09:59.619 rmmod nvme_fabrics 00:09:59.619 rmmod nvme_keyring 00:09:59.619 09:37:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:59.619 09:37:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:09:59.620 09:37:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:09:59.620 09:37:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 63764 ']' 00:09:59.620 09:37:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 63764 00:09:59.620 09:37:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 63764 ']' 00:09:59.620 09:37:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 63764 00:09:59.620 09:37:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:09:59.620 09:37:47 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:59.620 09:37:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63764 00:09:59.878 killing process with pid 63764 00:09:59.878 09:37:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:59.878 09:37:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:59.878 09:37:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63764' 00:09:59.878 09:37:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 63764 00:09:59.878 09:37:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 63764 00:09:59.878 09:37:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:59.878 09:37:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:59.878 09:37:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:59.878 09:37:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:09:59.878 09:37:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:09:59.878 09:37:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:09:59.878 09:37:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:59.878 09:37:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:59.878 09:37:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:59.878 09:37:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:59.878 09:37:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:00.137 09:37:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:00.137 09:37:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:00.137 09:37:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:00.137 09:37:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:00.137 09:37:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:00.137 09:37:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:00.137 09:37:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:00.137 09:37:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:10:00.137 09:37:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:00.137 09:37:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:00.137 09:37:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:00.137 09:37:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@246 -- # remove_spdk_ns 00:10:00.137 09:37:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:00.137 09:37:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:00.137 09:37:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:00.137 09:37:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@300 -- # return 0 00:10:00.137 00:10:00.137 real 0m43.479s 00:10:00.137 user 1m10.386s 00:10:00.137 sys 0m11.558s 00:10:00.137 09:37:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:00.137 09:37:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:00.137 ************************************ 00:10:00.137 END TEST nvmf_lvs_grow 00:10:00.137 ************************************ 00:10:00.398 09:37:47 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:10:00.398 09:37:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:00.398 09:37:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:00.398 09:37:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:00.398 ************************************ 00:10:00.398 START TEST nvmf_bdev_io_wait 00:10:00.398 ************************************ 00:10:00.398 09:37:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:10:00.398 * Looking for test storage... 
00:10:00.398 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:00.398 09:37:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:00.398 09:37:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:10:00.398 09:37:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:00.398 09:37:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:00.398 09:37:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:00.398 09:37:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:00.398 09:37:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:00.398 09:37:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:10:00.398 09:37:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:10:00.398 09:37:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:10:00.398 09:37:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:10:00.398 09:37:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:10:00.398 09:37:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:10:00.398 09:37:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:10:00.398 09:37:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:00.398 09:37:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:10:00.398 09:37:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:10:00.398 09:37:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:00.398 09:37:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:00.398 09:37:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:10:00.398 09:37:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:10:00.398 09:37:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:00.398 09:37:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:10:00.398 09:37:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:10:00.398 09:37:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:10:00.398 09:37:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:10:00.398 09:37:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:00.398 09:37:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:10:00.398 09:37:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:10:00.398 09:37:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:00.398 09:37:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:00.398 09:37:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:10:00.398 09:37:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:00.398 09:37:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:00.398 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:00.398 --rc genhtml_branch_coverage=1 00:10:00.398 --rc genhtml_function_coverage=1 00:10:00.398 --rc genhtml_legend=1 00:10:00.398 --rc geninfo_all_blocks=1 00:10:00.398 --rc geninfo_unexecuted_blocks=1 00:10:00.398 00:10:00.398 ' 00:10:00.398 09:37:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:00.398 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:00.398 --rc genhtml_branch_coverage=1 00:10:00.398 --rc genhtml_function_coverage=1 00:10:00.398 --rc genhtml_legend=1 00:10:00.398 --rc geninfo_all_blocks=1 00:10:00.398 --rc geninfo_unexecuted_blocks=1 00:10:00.398 00:10:00.398 ' 00:10:00.398 09:37:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:00.398 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:00.398 --rc genhtml_branch_coverage=1 00:10:00.398 --rc genhtml_function_coverage=1 00:10:00.398 --rc genhtml_legend=1 00:10:00.398 --rc geninfo_all_blocks=1 00:10:00.398 --rc geninfo_unexecuted_blocks=1 00:10:00.398 00:10:00.398 ' 00:10:00.398 09:37:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:00.398 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:00.398 --rc genhtml_branch_coverage=1 00:10:00.398 --rc genhtml_function_coverage=1 00:10:00.398 --rc genhtml_legend=1 00:10:00.398 --rc geninfo_all_blocks=1 00:10:00.398 --rc geninfo_unexecuted_blocks=1 00:10:00.398 00:10:00.398 ' 00:10:00.398 09:37:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:00.398 09:37:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait 
-- nvmf/common.sh@7 -- # uname -s 00:10:00.398 09:37:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:00.398 09:37:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:00.398 09:37:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:00.398 09:37:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:00.398 09:37:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:00.398 09:37:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:00.398 09:37:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:00.398 09:37:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:00.398 09:37:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:00.398 09:37:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:00.398 09:37:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c 00:10:00.398 09:37:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=9203ba0c-8506-4f0b-a886-a7f874c4694c 00:10:00.398 09:37:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:00.398 09:37:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:00.398 09:37:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:00.398 09:37:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:00.398 09:37:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:00.398 09:37:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:10:00.398 09:37:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:00.398 09:37:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:00.398 09:37:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:00.398 09:37:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:00.399 09:37:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:00.399 09:37:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:00.399 09:37:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:10:00.399 09:37:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:00.399 09:37:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:10:00.399 09:37:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:00.399 09:37:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:00.399 09:37:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:00.399 09:37:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:00.399 09:37:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:00.399 09:37:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:00.399 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:00.399 09:37:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:00.399 09:37:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:00.399 09:37:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:00.399 09:37:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:00.399 09:37:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 
00:10:00.399 09:37:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:10:00.399 09:37:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:00.399 09:37:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:00.399 09:37:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:00.399 09:37:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:00.399 09:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:00.399 09:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:00.399 09:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:00.399 09:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:00.399 09:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:10:00.399 09:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:10:00.399 09:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:10:00.399 09:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:10:00.399 09:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:10:00.399 09:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@460 -- # nvmf_veth_init 00:10:00.399 09:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:00.399 09:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:00.399 09:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:00.399 09:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:00.399 09:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:00.399 09:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:00.399 09:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:00.399 09:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:00.399 09:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:00.399 09:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:00.399 09:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:00.399 09:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:00.399 09:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:00.399 09:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:00.399 
09:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:00.399 09:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:00.399 09:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:00.739 Cannot find device "nvmf_init_br" 00:10:00.739 09:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:10:00.739 09:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:00.739 Cannot find device "nvmf_init_br2" 00:10:00.739 09:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:10:00.739 09:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:00.739 Cannot find device "nvmf_tgt_br" 00:10:00.739 09:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # true 00:10:00.739 09:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:00.739 Cannot find device "nvmf_tgt_br2" 00:10:00.739 09:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # true 00:10:00.739 09:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:00.739 Cannot find device "nvmf_init_br" 00:10:00.739 09:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # true 00:10:00.739 09:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:00.739 Cannot find device "nvmf_init_br2" 00:10:00.739 09:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # true 00:10:00.739 09:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:00.739 Cannot find device "nvmf_tgt_br" 00:10:00.739 09:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # true 00:10:00.739 09:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:00.739 Cannot find device "nvmf_tgt_br2" 00:10:00.739 09:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # true 00:10:00.739 09:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:00.739 Cannot find device "nvmf_br" 00:10:00.739 09:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # true 00:10:00.739 09:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:00.739 Cannot find device "nvmf_init_if" 00:10:00.739 09:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # true 00:10:00.739 09:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:00.739 Cannot find device "nvmf_init_if2" 00:10:00.739 09:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # true 00:10:00.740 09:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:00.740 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:00.740 09:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # true 00:10:00.740 
09:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:00.740 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:00.740 09:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # true 00:10:00.740 09:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:00.740 09:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:00.740 09:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:10:00.740 09:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:00.740 09:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:00.740 09:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:00.740 09:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:00.740 09:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:00.740 09:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:00.740 09:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:00.740 09:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:00.740 09:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:00.740 09:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:00.740 09:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:00.740 09:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:00.740 09:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:00.740 09:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:00.740 09:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:00.740 09:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:00.740 09:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:00.740 09:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:00.740 09:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:00.740 09:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:00.740 09:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:10:00.999 09:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:00.999 09:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:00.999 09:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:00.999 09:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:00.999 09:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:00.999 09:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:00.999 09:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:00.999 09:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:00.999 09:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:00.999 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:00.999 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.098 ms 00:10:00.999 00:10:00.999 --- 10.0.0.3 ping statistics --- 00:10:00.999 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:00.999 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:10:00.999 09:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:00.999 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:00.999 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.109 ms 00:10:00.999 00:10:00.999 --- 10.0.0.4 ping statistics --- 00:10:00.999 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:00.999 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:10:00.999 09:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:00.999 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:00.999 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.045 ms 00:10:00.999 00:10:00.999 --- 10.0.0.1 ping statistics --- 00:10:00.999 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:00.999 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:10:00.999 09:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:00.999 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:00.999 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.113 ms 00:10:00.999 00:10:00.999 --- 10.0.0.2 ping statistics --- 00:10:00.999 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:00.999 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:10:00.999 09:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:00.999 09:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@461 -- # return 0 00:10:00.999 09:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:00.999 09:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:00.999 09:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:00.999 09:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:00.999 09:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:00.999 09:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:00.999 09:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:00.999 09:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:10:00.999 09:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:00.999 09:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:00.999 09:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:00.999 09:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=64142 00:10:00.999 09:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:10:00.999 09:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 64142 00:10:00.999 09:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 64142 ']' 00:10:00.999 09:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:00.999 09:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:00.999 09:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:00.999 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:00.999 09:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:00.999 09:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:00.999 [2024-11-19 09:37:48.517260] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 
00:10:00.999 [2024-11-19 09:37:48.517369] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:01.258 [2024-11-19 09:37:48.665628] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:01.258 [2024-11-19 09:37:48.747671] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:01.258 [2024-11-19 09:37:48.747744] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:01.258 [2024-11-19 09:37:48.747757] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:01.258 [2024-11-19 09:37:48.747766] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:01.258 [2024-11-19 09:37:48.747773] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:01.258 [2024-11-19 09:37:48.749276] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:01.258 [2024-11-19 09:37:48.749367] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:01.258 [2024-11-19 09:37:48.749514] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:01.258 [2024-11-19 09:37:48.749523] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:01.258 09:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:01.258 09:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:10:01.258 09:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:01.258 09:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:01.258 09:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:01.258 09:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:01.258 09:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:10:01.258 09:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.258 09:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:01.258 09:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.258 09:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:10:01.258 09:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.258 09:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:01.518 [2024-11-19 09:37:48.920458] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:01.518 09:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.518 09:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:01.518 09:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.518 09:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:01.518 [2024-11-19 09:37:48.938035] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:01.518 09:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.518 09:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:01.518 09:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.518 09:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:01.518 Malloc0 00:10:01.518 09:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.518 09:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:01.518 09:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.518 09:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:01.518 09:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.518 09:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:01.518 09:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.518 09:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:01.518 09:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.518 09:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:01.518 09:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.518 09:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:01.518 [2024-11-19 09:37:49.003611] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:01.518 09:37:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.518 09:37:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=64169 00:10:01.518 09:37:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:10:01.518 09:37:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:10:01.518 09:37:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=64171 00:10:01.518 09:37:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:10:01.518 09:37:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:10:01.518 09:37:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:01.518 09:37:49 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:01.518 { 00:10:01.518 "params": { 00:10:01.518 "name": "Nvme$subsystem", 00:10:01.518 "trtype": "$TEST_TRANSPORT", 00:10:01.518 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:01.518 "adrfam": "ipv4", 00:10:01.518 "trsvcid": "$NVMF_PORT", 00:10:01.518 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:01.518 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:01.518 "hdgst": ${hdgst:-false}, 00:10:01.518 "ddgst": ${ddgst:-false} 00:10:01.518 }, 00:10:01.518 "method": "bdev_nvme_attach_controller" 00:10:01.518 } 00:10:01.518 EOF 00:10:01.518 )") 00:10:01.518 09:37:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:10:01.518 09:37:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:10:01.518 09:37:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:10:01.518 09:37:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=64173 00:10:01.518 09:37:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:10:01.518 09:37:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:01.518 09:37:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:01.518 { 00:10:01.518 "params": { 00:10:01.518 "name": "Nvme$subsystem", 00:10:01.518 "trtype": "$TEST_TRANSPORT", 00:10:01.518 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:01.518 "adrfam": "ipv4", 00:10:01.518 "trsvcid": "$NVMF_PORT", 00:10:01.518 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:01.518 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:01.518 "hdgst": ${hdgst:-false}, 00:10:01.518 "ddgst": ${ddgst:-false} 00:10:01.518 }, 00:10:01.518 "method": "bdev_nvme_attach_controller" 00:10:01.518 } 00:10:01.518 EOF 00:10:01.518 )") 00:10:01.519 09:37:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:10:01.519 09:37:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=64176 00:10:01.519 09:37:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:10:01.519 09:37:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:10:01.519 09:37:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:10:01.519 09:37:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:10:01.519 09:37:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:10:01.519 09:37:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:10:01.519 09:37:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:10:01.519 09:37:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:01.519 09:37:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 
00:10:01.519 { 00:10:01.519 "params": { 00:10:01.519 "name": "Nvme$subsystem", 00:10:01.519 "trtype": "$TEST_TRANSPORT", 00:10:01.519 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:01.519 "adrfam": "ipv4", 00:10:01.519 "trsvcid": "$NVMF_PORT", 00:10:01.519 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:01.519 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:01.519 "hdgst": ${hdgst:-false}, 00:10:01.519 "ddgst": ${ddgst:-false} 00:10:01.519 }, 00:10:01.519 "method": "bdev_nvme_attach_controller" 00:10:01.519 } 00:10:01.519 EOF 00:10:01.519 )") 00:10:01.519 09:37:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:10:01.519 09:37:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:10:01.519 09:37:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:10:01.519 09:37:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:01.519 "params": { 00:10:01.519 "name": "Nvme1", 00:10:01.519 "trtype": "tcp", 00:10:01.519 "traddr": "10.0.0.3", 00:10:01.519 "adrfam": "ipv4", 00:10:01.519 "trsvcid": "4420", 00:10:01.519 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:01.519 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:01.519 "hdgst": false, 00:10:01.519 "ddgst": false 00:10:01.519 }, 00:10:01.519 "method": "bdev_nvme_attach_controller" 00:10:01.519 }' 00:10:01.519 09:37:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:10:01.519 09:37:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:10:01.519 09:37:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:01.519 "params": { 00:10:01.519 "name": "Nvme1", 00:10:01.519 "trtype": "tcp", 00:10:01.519 "traddr": "10.0.0.3", 00:10:01.519 "adrfam": "ipv4", 00:10:01.519 "trsvcid": "4420", 00:10:01.519 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:01.519 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:01.519 "hdgst": false, 00:10:01.519 "ddgst": false 00:10:01.519 }, 00:10:01.519 "method": "bdev_nvme_attach_controller" 00:10:01.519 }' 00:10:01.519 09:37:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:10:01.519 09:37:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:10:01.519 09:37:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:10:01.519 09:37:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:01.519 09:37:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:01.519 { 00:10:01.519 "params": { 00:10:01.519 "name": "Nvme$subsystem", 00:10:01.519 "trtype": "$TEST_TRANSPORT", 00:10:01.519 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:01.519 "adrfam": "ipv4", 00:10:01.519 "trsvcid": "$NVMF_PORT", 00:10:01.519 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:01.519 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:01.519 "hdgst": ${hdgst:-false}, 00:10:01.519 "ddgst": ${ddgst:-false} 00:10:01.519 }, 00:10:01.519 "method": "bdev_nvme_attach_controller" 00:10:01.519 } 00:10:01.519 EOF 00:10:01.519 )") 00:10:01.519 09:37:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:10:01.519 09:37:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
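Note: the gen_nvmf_target_json output captured above emits one bdev_nvme_attach_controller fragment per subsystem from a heredoc and hands the result to each bdevperf instance over --json /dev/fd/63 (process substitution). A minimal sketch of that pattern follows; the JSON fragment is copied from the trace, while the gen_config helper name and the trailing jq validation are illustrative assumptions, not the helper's actual implementation.

gen_config() {
  # assumed helper name; the real generator lives in nvmf/common.sh
  local subsystem=${1:-1}
  cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.3",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
}
# Each workload reads its generated config through process substitution, which
# is what appears as --json /dev/fd/63 in the bdevperf command lines above, e.g.:
#   bdevperf -m 0x10 -i 1 --json <(gen_config 1 | jq .) -q 128 -o 4096 -w write -t 1 -s 256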
00:10:01.519 09:37:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:10:01.519 09:37:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:01.519 "params": { 00:10:01.519 "name": "Nvme1", 00:10:01.519 "trtype": "tcp", 00:10:01.519 "traddr": "10.0.0.3", 00:10:01.519 "adrfam": "ipv4", 00:10:01.519 "trsvcid": "4420", 00:10:01.519 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:01.519 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:01.519 "hdgst": false, 00:10:01.519 "ddgst": false 00:10:01.519 }, 00:10:01.519 "method": "bdev_nvme_attach_controller" 00:10:01.519 }' 00:10:01.519 09:37:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:10:01.519 09:37:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:10:01.519 09:37:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:01.519 "params": { 00:10:01.519 "name": "Nvme1", 00:10:01.519 "trtype": "tcp", 00:10:01.519 "traddr": "10.0.0.3", 00:10:01.519 "adrfam": "ipv4", 00:10:01.519 "trsvcid": "4420", 00:10:01.519 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:01.519 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:01.519 "hdgst": false, 00:10:01.519 "ddgst": false 00:10:01.519 }, 00:10:01.519 "method": "bdev_nvme_attach_controller" 00:10:01.519 }' 00:10:01.519 [2024-11-19 09:37:49.071017] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:10:01.519 [2024-11-19 09:37:49.071111] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:10:01.519 09:37:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 64169 00:10:01.519 [2024-11-19 09:37:49.076039] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:10:01.519 [2024-11-19 09:37:49.076135] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:10:01.519 [2024-11-19 09:37:49.089306] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:10:01.519 [2024-11-19 09:37:49.089397] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:10:01.519 [2024-11-19 09:37:49.106075] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 
00:10:01.519 [2024-11-19 09:37:49.106408] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:10:01.777 [2024-11-19 09:37:49.293812] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:01.777 [2024-11-19 09:37:49.351257] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:10:01.777 [2024-11-19 09:37:49.365967] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:01.777 [2024-11-19 09:37:49.366762] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:02.035 [2024-11-19 09:37:49.423436] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:10:02.035 [2024-11-19 09:37:49.437360] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:02.035 [2024-11-19 09:37:49.441184] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:02.035 [2024-11-19 09:37:49.498518] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:10:02.035 Running I/O for 1 seconds... 00:10:02.035 [2024-11-19 09:37:49.513348] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:02.035 [2024-11-19 09:37:49.515538] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:02.035 Running I/O for 1 seconds... 00:10:02.035 [2024-11-19 09:37:49.571804] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:10:02.035 [2024-11-19 09:37:49.585660] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:02.035 Running I/O for 1 seconds... 00:10:02.293 Running I/O for 1 seconds... 
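Note: the four bdevperf invocations traced above (core masks 0x10/0x20/0x40/0x80 for the write, read, flush and unmap workloads) run concurrently against the same cnode1 subsystem for one second each, and the script records WRITE_PID/READ_PID/FLUSH_PID/UNMAP_PID so it can wait on every instance. A condensed sketch of that orchestration, assuming a gen_config helper like the one sketched earlier:

BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
pids=()
i=1
for spec in "0x10 write" "0x20 read" "0x40 flush" "0x80 unmap"; do
  read -r mask workload <<<"$spec"
  # one independent bdevperf process per workload: 128 outstanding 4096-byte I/Os
  "$BDEVPERF" -m "$mask" -i "$i" --json <(gen_config 1) \
      -q 128 -o 4096 -w "$workload" -t 1 -s 256 &
  pids+=($!)
  i=$((i + 1))
done
wait "${pids[@]}"   # collect all four result tables before tearing the target down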
00:10:03.230 170944.00 IOPS, 667.75 MiB/s 00:10:03.230 Latency(us) 00:10:03.230 [2024-11-19T09:37:50.853Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:03.230 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:10:03.230 Nvme1n1 : 1.00 170606.18 666.43 0.00 0.00 746.31 359.33 3961.95 00:10:03.230 [2024-11-19T09:37:50.853Z] =================================================================================================================== 00:10:03.230 [2024-11-19T09:37:50.853Z] Total : 170606.18 666.43 0.00 0.00 746.31 359.33 3961.95 00:10:03.230 8273.00 IOPS, 32.32 MiB/s 00:10:03.230 Latency(us) 00:10:03.230 [2024-11-19T09:37:50.853Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:03.230 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:10:03.230 Nvme1n1 : 1.01 8312.04 32.47 0.00 0.00 15315.70 7983.48 23235.49 00:10:03.230 [2024-11-19T09:37:50.853Z] =================================================================================================================== 00:10:03.230 [2024-11-19T09:37:50.853Z] Total : 8312.04 32.47 0.00 0.00 15315.70 7983.48 23235.49 00:10:03.230 6150.00 IOPS, 24.02 MiB/s 00:10:03.230 Latency(us) 00:10:03.230 [2024-11-19T09:37:50.853Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:03.230 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:10:03.230 Nvme1n1 : 1.02 6206.58 24.24 0.00 0.00 20479.65 4468.36 27525.12 00:10:03.230 [2024-11-19T09:37:50.853Z] =================================================================================================================== 00:10:03.230 [2024-11-19T09:37:50.853Z] Total : 6206.58 24.24 0.00 0.00 20479.65 4468.36 27525.12 00:10:03.230 5480.00 IOPS, 21.41 MiB/s 00:10:03.230 Latency(us) 00:10:03.230 [2024-11-19T09:37:50.853Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:03.230 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:10:03.230 Nvme1n1 : 1.01 5545.67 21.66 0.00 0.00 22955.71 8579.26 36461.85 00:10:03.230 [2024-11-19T09:37:50.853Z] =================================================================================================================== 00:10:03.230 [2024-11-19T09:37:50.853Z] Total : 5545.67 21.66 0.00 0.00 22955.71 8579.26 36461.85 00:10:03.230 09:37:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 64171 00:10:03.230 09:37:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 64173 00:10:03.230 09:37:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 64176 00:10:03.489 09:37:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:03.489 09:37:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.489 09:37:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:03.489 09:37:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.489 09:37:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:10:03.489 09:37:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:10:03.489 09:37:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # 
nvmfcleanup 00:10:03.489 09:37:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:10:03.489 09:37:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:03.489 09:37:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:10:03.489 09:37:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:03.489 09:37:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:03.489 rmmod nvme_tcp 00:10:03.489 rmmod nvme_fabrics 00:10:03.489 rmmod nvme_keyring 00:10:03.489 09:37:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:03.489 09:37:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:10:03.489 09:37:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:10:03.489 09:37:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 64142 ']' 00:10:03.489 09:37:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 64142 00:10:03.489 09:37:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 64142 ']' 00:10:03.489 09:37:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 64142 00:10:03.489 09:37:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:10:03.489 09:37:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:03.489 09:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64142 00:10:03.489 09:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:03.489 09:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:03.489 killing process with pid 64142 00:10:03.489 09:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64142' 00:10:03.489 09:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 64142 00:10:03.489 09:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 64142 00:10:03.748 09:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:03.748 09:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:03.748 09:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:03.748 09:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:10:03.748 09:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:10:03.748 09:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:03.748 09:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:10:03.748 09:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:03.748 09:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:03.748 09:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:03.748 09:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:03.748 09:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:03.748 09:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:03.748 09:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:03.748 09:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:03.748 09:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:03.748 09:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:03.748 09:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:04.007 09:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:10:04.007 09:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:04.007 09:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:04.007 09:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:04.007 09:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@246 -- # remove_spdk_ns 00:10:04.007 09:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:04.007 09:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:04.007 09:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:04.007 09:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@300 -- # return 0 00:10:04.007 00:10:04.007 real 0m3.747s 00:10:04.007 user 0m14.514s 00:10:04.007 sys 0m2.271s 00:10:04.007 09:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:04.007 09:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:04.007 ************************************ 00:10:04.007 END TEST nvmf_bdev_io_wait 00:10:04.007 ************************************ 00:10:04.007 09:37:51 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:10:04.007 09:37:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:04.007 09:37:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:04.007 09:37:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:04.007 ************************************ 00:10:04.007 START TEST nvmf_queue_depth 00:10:04.007 ************************************ 00:10:04.007 09:37:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:10:04.266 * Looking for test storage... 
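Note: the nvmftestfini sequence traced just before the END marker above stops the target process, unloads nvme-tcp/nvme-fabrics, strips only the SPDK_NVMF-tagged iptables rules, and dismantles the veth/bridge topology. A condensed sketch of that teardown (interface and namespace names taken from the trace; the per-link loop and the final netns removal are simplified assumptions about what killprocess and remove_spdk_ns do):

kill "$nvmfpid" && wait "$nvmfpid"                     # stop the nvmf_tgt reactor
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics
iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only SPDK's rules
for link in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
  ip link set "$link" nomaster
  ip link set "$link" down
done
ip link delete nvmf_br type bridge
ip link delete nvmf_init_if
ip link delete nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
ip netns delete nvmf_tgt_ns_spdk                       # assumed effect of remove_spdk_ns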
00:10:04.266 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:04.266 09:37:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:04.266 09:37:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:10:04.266 09:37:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:04.266 09:37:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:04.266 09:37:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:04.266 09:37:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:04.266 09:37:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:04.266 09:37:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:10:04.266 09:37:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:10:04.266 09:37:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:10:04.266 09:37:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:10:04.266 09:37:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:10:04.266 09:37:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:10:04.266 09:37:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:10:04.266 09:37:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:04.266 09:37:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:10:04.266 09:37:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:10:04.266 09:37:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:04.266 09:37:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:04.266 09:37:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:10:04.266 09:37:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:10:04.266 09:37:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:04.266 09:37:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:10:04.266 09:37:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:10:04.266 09:37:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:10:04.266 09:37:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:10:04.266 09:37:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:04.266 09:37:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:10:04.266 09:37:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:10:04.266 09:37:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:04.266 09:37:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:04.266 09:37:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:10:04.267 09:37:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:04.267 09:37:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:04.267 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:04.267 --rc genhtml_branch_coverage=1 00:10:04.267 --rc genhtml_function_coverage=1 00:10:04.267 --rc genhtml_legend=1 00:10:04.267 --rc geninfo_all_blocks=1 00:10:04.267 --rc geninfo_unexecuted_blocks=1 00:10:04.267 00:10:04.267 ' 00:10:04.267 09:37:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:04.267 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:04.267 --rc genhtml_branch_coverage=1 00:10:04.267 --rc genhtml_function_coverage=1 00:10:04.267 --rc genhtml_legend=1 00:10:04.267 --rc geninfo_all_blocks=1 00:10:04.267 --rc geninfo_unexecuted_blocks=1 00:10:04.267 00:10:04.267 ' 00:10:04.267 09:37:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:04.267 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:04.267 --rc genhtml_branch_coverage=1 00:10:04.267 --rc genhtml_function_coverage=1 00:10:04.267 --rc genhtml_legend=1 00:10:04.267 --rc geninfo_all_blocks=1 00:10:04.267 --rc geninfo_unexecuted_blocks=1 00:10:04.267 00:10:04.267 ' 00:10:04.267 09:37:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:04.267 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:04.267 --rc genhtml_branch_coverage=1 00:10:04.267 --rc genhtml_function_coverage=1 00:10:04.267 --rc genhtml_legend=1 00:10:04.267 --rc geninfo_all_blocks=1 00:10:04.267 --rc geninfo_unexecuted_blocks=1 00:10:04.267 00:10:04.267 ' 00:10:04.267 09:37:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:04.267 09:37:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 
-- # uname -s 00:10:04.267 09:37:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:04.267 09:37:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:04.267 09:37:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:04.267 09:37:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:04.267 09:37:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:04.267 09:37:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:04.267 09:37:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:04.267 09:37:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:04.267 09:37:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:04.267 09:37:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:04.267 09:37:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c 00:10:04.267 09:37:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=9203ba0c-8506-4f0b-a886-a7f874c4694c 00:10:04.267 09:37:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:04.267 09:37:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:04.267 09:37:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:04.267 09:37:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:04.267 09:37:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:04.267 09:37:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:10:04.267 09:37:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:04.267 09:37:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:04.267 09:37:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:04.267 09:37:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.267 09:37:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.267 09:37:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.267 09:37:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:10:04.267 09:37:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.267 09:37:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:10:04.267 09:37:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:04.267 09:37:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:04.267 09:37:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:04.267 09:37:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:04.267 09:37:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:04.267 09:37:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:04.267 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:04.267 09:37:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:04.267 09:37:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:04.267 09:37:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:04.267 09:37:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:10:04.267 09:37:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:10:04.267 
09:37:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:10:04.267 09:37:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:10:04.267 09:37:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:04.267 09:37:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:04.267 09:37:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:04.267 09:37:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:04.267 09:37:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:04.267 09:37:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:04.267 09:37:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:04.267 09:37:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:04.267 09:37:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:10:04.267 09:37:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:10:04.267 09:37:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:10:04.267 09:37:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:10:04.267 09:37:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:10:04.267 09:37:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@460 -- # nvmf_veth_init 00:10:04.267 09:37:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:04.267 09:37:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:04.267 09:37:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:04.267 09:37:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:04.267 09:37:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:04.267 09:37:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:04.267 09:37:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:04.267 09:37:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:04.267 09:37:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:04.267 09:37:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:04.267 09:37:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:04.267 09:37:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:04.267 09:37:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:04.267 09:37:51 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:04.267 09:37:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:04.267 09:37:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:04.268 09:37:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:04.268 Cannot find device "nvmf_init_br" 00:10:04.268 09:37:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:10:04.268 09:37:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:04.268 Cannot find device "nvmf_init_br2" 00:10:04.268 09:37:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:10:04.268 09:37:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:04.268 Cannot find device "nvmf_tgt_br" 00:10:04.268 09:37:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # true 00:10:04.268 09:37:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:04.268 Cannot find device "nvmf_tgt_br2" 00:10:04.268 09:37:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # true 00:10:04.268 09:37:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:04.268 Cannot find device "nvmf_init_br" 00:10:04.268 09:37:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # true 00:10:04.268 09:37:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:04.268 Cannot find device "nvmf_init_br2" 00:10:04.268 09:37:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # true 00:10:04.268 09:37:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:04.268 Cannot find device "nvmf_tgt_br" 00:10:04.268 09:37:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # true 00:10:04.268 09:37:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:04.268 Cannot find device "nvmf_tgt_br2" 00:10:04.268 09:37:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # true 00:10:04.268 09:37:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:04.526 Cannot find device "nvmf_br" 00:10:04.526 09:37:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # true 00:10:04.526 09:37:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:04.526 Cannot find device "nvmf_init_if" 00:10:04.526 09:37:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # true 00:10:04.526 09:37:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:04.526 Cannot find device "nvmf_init_if2" 00:10:04.526 09:37:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # true 00:10:04.526 09:37:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:04.526 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:04.526 09:37:51 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # true 00:10:04.526 09:37:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:04.526 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:04.526 09:37:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # true 00:10:04.526 09:37:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:04.526 09:37:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:04.526 09:37:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:10:04.526 09:37:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:04.526 09:37:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:04.526 09:37:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:04.526 09:37:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:04.526 09:37:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:04.526 09:37:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:04.526 09:37:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:04.526 09:37:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:04.526 09:37:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:04.526 09:37:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:04.526 09:37:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:04.526 09:37:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:04.526 09:37:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:04.526 09:37:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:04.526 09:37:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:04.526 09:37:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:04.526 09:37:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:04.526 09:37:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:04.526 09:37:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:04.526 09:37:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:04.526 
09:37:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:10:04.526 09:37:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:04.526 09:37:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:04.784 09:37:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:04.784 09:37:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:04.784 09:37:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:04.784 09:37:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:04.784 09:37:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:04.784 09:37:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:04.784 09:37:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:04.784 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:04.784 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:10:04.784 00:10:04.784 --- 10.0.0.3 ping statistics --- 00:10:04.784 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:04.784 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:10:04.784 09:37:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:04.784 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:04.784 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.047 ms 00:10:04.784 00:10:04.784 --- 10.0.0.4 ping statistics --- 00:10:04.784 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:04.784 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:10:04.784 09:37:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:04.784 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:04.784 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:10:04.784 00:10:04.784 --- 10.0.0.1 ping statistics --- 00:10:04.784 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:04.784 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:10:04.784 09:37:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:04.784 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:04.785 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.075 ms 00:10:04.785 00:10:04.785 --- 10.0.0.2 ping statistics --- 00:10:04.785 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:04.785 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:10:04.785 09:37:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:04.785 09:37:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@461 -- # return 0 00:10:04.785 09:37:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:04.785 09:37:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:04.785 09:37:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:04.785 09:37:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:04.785 09:37:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:04.785 09:37:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:04.785 09:37:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:04.785 09:37:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:10:04.785 09:37:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:04.785 09:37:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:04.785 09:37:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:04.785 09:37:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:04.785 09:37:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=64434 00:10:04.785 09:37:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 64434 00:10:04.785 09:37:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 64434 ']' 00:10:04.785 09:37:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:04.785 09:37:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:04.785 09:37:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:04.785 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:04.785 09:37:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:04.785 09:37:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:04.785 [2024-11-19 09:37:52.277247] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 
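Note: the nvmf_veth_init sequence traced above builds the test network: a target namespace (nvmf_tgt_ns_spdk), veth pairs for the initiator (nvmf_init_if/if2) and the target (nvmf_tgt_if/if2), a bridge (nvmf_br) joining the peer ends, addresses 10.0.0.1-10.0.0.4, iptables ACCEPT rules for TCP port 4420, and ping checks in both directions. Condensed to a single initiator/target pair, the topology is built roughly as follows (names, addresses and rule comments copied from the trace):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br        # host side joins the bridge
ip link set nvmf_tgt_br master nvmf_br         # namespace side joins the bridge
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.3                             # initiator -> target over the bridge
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1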
00:10:04.785 [2024-11-19 09:37:52.277975] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:05.043 [2024-11-19 09:37:52.437764] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:05.043 [2024-11-19 09:37:52.507807] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:05.043 [2024-11-19 09:37:52.507877] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:05.043 [2024-11-19 09:37:52.507892] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:05.043 [2024-11-19 09:37:52.507903] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:05.043 [2024-11-19 09:37:52.507912] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:05.043 [2024-11-19 09:37:52.508404] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:05.043 [2024-11-19 09:37:52.567746] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:05.043 09:37:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:05.043 09:37:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:10:05.043 09:37:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:05.043 09:37:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:05.043 09:37:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:05.302 09:37:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:05.302 09:37:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:05.302 09:37:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.302 09:37:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:05.302 [2024-11-19 09:37:52.694779] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:05.302 09:37:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.302 09:37:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:05.302 09:37:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.302 09:37:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:05.302 Malloc0 00:10:05.302 09:37:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.302 09:37:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:05.302 09:37:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.302 09:37:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # 
set +x 00:10:05.302 09:37:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.302 09:37:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:05.302 09:37:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.302 09:37:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:05.302 09:37:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.302 09:37:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:05.302 09:37:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.302 09:37:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:05.302 [2024-11-19 09:37:52.748303] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:05.302 09:37:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.302 09:37:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=64459 00:10:05.302 09:37:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:10:05.302 09:37:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:05.302 09:37:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 64459 /var/tmp/bdevperf.sock 00:10:05.302 09:37:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 64459 ']' 00:10:05.302 09:37:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:05.302 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:05.302 09:37:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:05.302 09:37:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:05.302 09:37:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:05.302 09:37:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:05.302 [2024-11-19 09:37:52.812816] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 
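Note: the queue_depth target provisioning traced above creates the TCP transport, a 64 MiB malloc bdev, subsystem cnode1 with that bdev as a namespace, and a listener on 10.0.0.3:4420, then launches bdevperf idle (-z) on its own RPC socket. A sketch of the equivalent sequence using rpc.py directly (the rpc.py path is an assumption; the trace uses the rpc_cmd wrapper, and the last two steps correspond to the bdev_nvme_attach_controller and perform_tests entries that follow below):

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # assumed RPC client path
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

# bdevperf waits idle (-z) on its own socket until a bdev is attached; the
# verify workload then runs at queue depth 1024 for 10 seconds
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
$RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
    -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
    -s /var/tmp/bdevperf.sock perform_tests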
00:10:05.302 [2024-11-19 09:37:52.812936] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64459 ] 00:10:05.561 [2024-11-19 09:37:52.965698] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:05.561 [2024-11-19 09:37:53.030574] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:05.561 [2024-11-19 09:37:53.088002] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:05.561 09:37:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:05.561 09:37:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:10:05.561 09:37:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:10:05.561 09:37:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.561 09:37:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:05.820 NVMe0n1 00:10:05.820 09:37:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.820 09:37:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:05.820 Running I/O for 10 seconds... 00:10:08.133 6679.00 IOPS, 26.09 MiB/s [2024-11-19T09:37:56.692Z] 7186.00 IOPS, 28.07 MiB/s [2024-11-19T09:37:57.633Z] 7445.33 IOPS, 29.08 MiB/s [2024-11-19T09:37:58.567Z] 7523.00 IOPS, 29.39 MiB/s [2024-11-19T09:37:59.500Z] 7594.40 IOPS, 29.67 MiB/s [2024-11-19T09:38:00.436Z] 7702.50 IOPS, 30.09 MiB/s [2024-11-19T09:38:01.371Z] 7762.71 IOPS, 30.32 MiB/s [2024-11-19T09:38:02.747Z] 7814.38 IOPS, 30.52 MiB/s [2024-11-19T09:38:03.685Z] 7862.33 IOPS, 30.71 MiB/s [2024-11-19T09:38:03.685Z] 7918.00 IOPS, 30.93 MiB/s 00:10:16.062 Latency(us) 00:10:16.062 [2024-11-19T09:38:03.685Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:16.062 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:10:16.062 Verification LBA range: start 0x0 length 0x4000 00:10:16.062 NVMe0n1 : 10.07 7962.18 31.10 0.00 0.00 128015.05 12809.31 95801.72 00:10:16.062 [2024-11-19T09:38:03.685Z] =================================================================================================================== 00:10:16.062 [2024-11-19T09:38:03.685Z] Total : 7962.18 31.10 0.00 0.00 128015.05 12809.31 95801.72 00:10:16.062 { 00:10:16.062 "results": [ 00:10:16.062 { 00:10:16.062 "job": "NVMe0n1", 00:10:16.062 "core_mask": "0x1", 00:10:16.062 "workload": "verify", 00:10:16.062 "status": "finished", 00:10:16.062 "verify_range": { 00:10:16.062 "start": 0, 00:10:16.062 "length": 16384 00:10:16.062 }, 00:10:16.062 "queue_depth": 1024, 00:10:16.062 "io_size": 4096, 00:10:16.062 "runtime": 10.072113, 00:10:16.062 "iops": 7962.18231467419, 00:10:16.062 "mibps": 31.102274666696054, 00:10:16.062 "io_failed": 0, 00:10:16.062 "io_timeout": 0, 00:10:16.062 "avg_latency_us": 128015.048884415, 00:10:16.063 "min_latency_us": 12809.309090909092, 00:10:16.063 "max_latency_us": 95801.71636363637 00:10:16.063 } 
00:10:16.063 ], 00:10:16.063 "core_count": 1 00:10:16.063 } 00:10:16.063 09:38:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 64459 00:10:16.063 09:38:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 64459 ']' 00:10:16.063 09:38:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 64459 00:10:16.063 09:38:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:10:16.063 09:38:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:16.063 09:38:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64459 00:10:16.063 09:38:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:16.063 killing process with pid 64459 00:10:16.063 Received shutdown signal, test time was about 10.000000 seconds 00:10:16.063 00:10:16.063 Latency(us) 00:10:16.063 [2024-11-19T09:38:03.686Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:16.063 [2024-11-19T09:38:03.686Z] =================================================================================================================== 00:10:16.063 [2024-11-19T09:38:03.686Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:16.063 09:38:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:16.063 09:38:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64459' 00:10:16.063 09:38:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 64459 00:10:16.063 09:38:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 64459 00:10:16.063 09:38:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:10:16.063 09:38:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:10:16.063 09:38:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:16.063 09:38:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:10:16.321 09:38:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:16.321 09:38:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:10:16.321 09:38:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:16.321 09:38:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:16.321 rmmod nvme_tcp 00:10:16.321 rmmod nvme_fabrics 00:10:16.321 rmmod nvme_keyring 00:10:16.321 09:38:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:16.321 09:38:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:10:16.321 09:38:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:10:16.321 09:38:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 64434 ']' 00:10:16.321 09:38:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 64434 00:10:16.321 09:38:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 64434 ']' 00:10:16.321 
09:38:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 64434 00:10:16.321 09:38:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:10:16.321 09:38:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:16.321 09:38:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64434 00:10:16.321 09:38:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:10:16.321 09:38:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:10:16.321 killing process with pid 64434 00:10:16.321 09:38:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64434' 00:10:16.321 09:38:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 64434 00:10:16.321 09:38:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 64434 00:10:16.580 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:16.580 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:16.580 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:16.580 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:10:16.580 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:16.580 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:10:16.580 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:10:16.580 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:16.580 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:16.580 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:16.580 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:16.580 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:16.580 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:16.580 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:16.580 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:16.580 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:16.580 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:16.580 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:16.580 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:10:16.580 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:16.838 09:38:04 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:16.838 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:16.838 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@246 -- # remove_spdk_ns 00:10:16.838 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:16.838 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:16.838 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:16.838 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@300 -- # return 0 00:10:16.838 ************************************ 00:10:16.838 END TEST nvmf_queue_depth 00:10:16.838 ************************************ 00:10:16.838 00:10:16.838 real 0m12.700s 00:10:16.838 user 0m21.499s 00:10:16.838 sys 0m2.239s 00:10:16.838 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:16.838 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:16.838 09:38:04 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:10:16.838 09:38:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:16.838 09:38:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:16.838 09:38:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:16.838 ************************************ 00:10:16.838 START TEST nvmf_target_multipath 00:10:16.838 ************************************ 00:10:16.838 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:10:16.838 * Looking for test storage... 
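For orientation, the nvmf_queue_depth run that just finished drives a separately launched bdevperf application entirely over its RPC socket; a minimal sketch of the steps visible in the log above follows (the bdevperf launch flags are an assumption inferred from the reported queue depth of 1024 and 4096-byte I/O size, and rpc_cmd is the test harness's wrapper around scripts/rpc.py):

  # launch bdevperf in RPC-wait mode (flags assumed, the launch itself is not in this excerpt)
  ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10
  # attach the namespace exported by the target; the resulting bdev shows up as NVMe0n1
  rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
      -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  # start the configured workload; the JSON results printed above come from this call
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests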
00:10:16.838 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:16.838 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:16.838 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:10:16.838 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:17.097 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:17.097 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:17.097 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:17.097 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:17.097 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:10:17.097 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:10:17.097 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:10:17.097 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:10:17.097 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:10:17.097 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:10:17.097 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:10:17.097 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:17.097 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:10:17.097 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:10:17.097 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:17.097 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:17.097 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:10:17.097 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:10:17.097 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:17.097 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:10:17.097 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:10:17.097 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:10:17.097 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:10:17.097 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:17.097 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:10:17.097 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:10:17.097 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:17.097 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:17.097 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:10:17.097 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:17.097 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:17.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:17.097 --rc genhtml_branch_coverage=1 00:10:17.097 --rc genhtml_function_coverage=1 00:10:17.097 --rc genhtml_legend=1 00:10:17.097 --rc geninfo_all_blocks=1 00:10:17.097 --rc geninfo_unexecuted_blocks=1 00:10:17.097 00:10:17.097 ' 00:10:17.097 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:17.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:17.097 --rc genhtml_branch_coverage=1 00:10:17.097 --rc genhtml_function_coverage=1 00:10:17.097 --rc genhtml_legend=1 00:10:17.097 --rc geninfo_all_blocks=1 00:10:17.097 --rc geninfo_unexecuted_blocks=1 00:10:17.097 00:10:17.097 ' 00:10:17.097 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:17.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:17.097 --rc genhtml_branch_coverage=1 00:10:17.097 --rc genhtml_function_coverage=1 00:10:17.097 --rc genhtml_legend=1 00:10:17.097 --rc geninfo_all_blocks=1 00:10:17.097 --rc geninfo_unexecuted_blocks=1 00:10:17.097 00:10:17.097 ' 00:10:17.097 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:17.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:17.097 --rc genhtml_branch_coverage=1 00:10:17.097 --rc genhtml_function_coverage=1 00:10:17.097 --rc genhtml_legend=1 00:10:17.097 --rc geninfo_all_blocks=1 00:10:17.097 --rc geninfo_unexecuted_blocks=1 00:10:17.097 00:10:17.097 ' 00:10:17.098 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:17.098 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:10:17.098 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:17.098 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:17.098 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:17.098 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:17.098 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:17.098 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:17.098 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:17.098 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:17.098 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:17.098 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:17.098 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c 00:10:17.098 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=9203ba0c-8506-4f0b-a886-a7f874c4694c 00:10:17.098 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:17.098 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:17.098 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:17.098 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:17.098 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:17.098 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:10:17.098 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:17.098 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:17.098 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:17.098 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:17.098 
09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:17.098 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:17.098 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:10:17.098 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:17.098 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:10:17.098 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:17.098 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:17.098 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:17.098 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:17.098 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:17.098 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:17.098 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:17.098 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:17.098 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:17.098 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:17.098 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:10:17.098 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:17.098 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:10:17.098 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:17.098 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:10:17.098 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:17.098 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:17.098 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:17.098 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:17.098 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:17.098 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:17.098 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:17.098 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:17.098 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:10:17.098 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:10:17.098 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:10:17.098 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:10:17.098 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:10:17.098 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@460 -- # nvmf_veth_init 00:10:17.098 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:17.098 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:17.098 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:17.098 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:17.098 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:17.098 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:17.098 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:17.098 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:17.098 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:17.098 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:17.098 09:38:04 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:17.098 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:17.098 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:17.098 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:17.098 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:17.098 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:17.098 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:17.098 Cannot find device "nvmf_init_br" 00:10:17.098 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:10:17.098 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:17.098 Cannot find device "nvmf_init_br2" 00:10:17.098 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:10:17.098 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:17.098 Cannot find device "nvmf_tgt_br" 00:10:17.098 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # true 00:10:17.098 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:17.098 Cannot find device "nvmf_tgt_br2" 00:10:17.098 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # true 00:10:17.098 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:17.098 Cannot find device "nvmf_init_br" 00:10:17.098 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # true 00:10:17.098 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:17.098 Cannot find device "nvmf_init_br2" 00:10:17.099 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # true 00:10:17.099 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:17.099 Cannot find device "nvmf_tgt_br" 00:10:17.099 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # true 00:10:17.099 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:17.099 Cannot find device "nvmf_tgt_br2" 00:10:17.099 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # true 00:10:17.099 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:17.099 Cannot find device "nvmf_br" 00:10:17.099 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # true 00:10:17.099 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:17.099 Cannot find device "nvmf_init_if" 00:10:17.099 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@171 -- # true 00:10:17.099 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:17.099 Cannot find device "nvmf_init_if2" 00:10:17.099 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # true 00:10:17.099 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:17.099 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:17.099 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # true 00:10:17.099 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:17.099 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:17.099 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # true 00:10:17.099 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:17.099 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:17.357 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:10:17.357 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:17.357 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:17.357 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:17.357 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:17.357 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:17.357 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:17.357 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:17.357 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:17.357 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:17.357 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:17.357 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:17.357 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:17.357 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:17.357 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:17.357 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 
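The block above is the harness building its virtual test network; condensed to the first initiator/target pair, the commands logged here amount to the sketch below (the second pair, 10.0.0.2 on nvmf_init_if2 and 10.0.0.4 on nvmf_tgt_if2, is set up the same way):

  ip netns add nvmf_tgt_ns_spdk
  # veth pairs: one end stays in the root namespace, the *_br ends are bridged later
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  # addresses: initiator side 10.0.0.1, target side 10.0.0.3 inside the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  # next in the log: enslave the *_br ends to the nvmf_br bridge, open TCP port 4420
  # in iptables, and ping all four addresses to verify connectivity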
00:10:17.357 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:17.357 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:17.357 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:17.357 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:17.357 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:17.357 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:10:17.357 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:17.357 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:17.357 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:17.358 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:17.358 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:17.358 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:17.358 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:17.358 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:17.358 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:17.358 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:17.358 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 00:10:17.358 00:10:17.358 --- 10.0.0.3 ping statistics --- 00:10:17.358 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:17.358 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:10:17.358 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:17.358 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:17.358 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.070 ms 00:10:17.358 00:10:17.358 --- 10.0.0.4 ping statistics --- 00:10:17.358 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:17.358 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:10:17.358 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:17.358 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:17.358 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.016 ms 00:10:17.358 00:10:17.358 --- 10.0.0.1 ping statistics --- 00:10:17.358 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:17.358 rtt min/avg/max/mdev = 0.016/0.016/0.016/0.000 ms 00:10:17.358 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:17.358 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:17.358 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.035 ms 00:10:17.358 00:10:17.358 --- 10.0.0.2 ping statistics --- 00:10:17.358 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:17.358 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:10:17.358 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:17.358 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@461 -- # return 0 00:10:17.358 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:17.358 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:17.358 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:17.358 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:17.358 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:17.358 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:17.358 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:17.358 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.4 ']' 00:10:17.358 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:10:17.358 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:10:17.358 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:17.358 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:17.358 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:17.358 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@509 -- # nvmfpid=64825 00:10:17.358 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:17.358 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@510 -- # waitforlisten 64825 00:10:17.358 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@835 -- # '[' -z 64825 ']' 00:10:17.358 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:17.358 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:17.358 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
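With connectivity verified, the target application is started inside the namespace and then provisioned over JSON-RPC; the steps that follow in the log reduce to the sketch below (commands abbreviated from the log; rpc.py is scripts/rpc.py in the SPDK repo, and the elided --hostnqn/--hostid values are the ones shown further down):

  # start nvmf_tgt inside the target namespace (pid 64825 above)
  ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  # create the TCP transport and a 64 MiB malloc bdev with 512-byte blocks
  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc0
  # export it through one ANA-reporting subsystem with listeners on both target addresses
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420
  # connect the host to both paths so the kernel sees a single multipath subsystem
  nvme connect --hostnqn=... --hostid=... -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G
  nvme connect --hostnqn=... --hostid=... -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.4 -s 4420 -g -G

The remainder of the test then flips each listener's ANA state (inaccessible, non_optimized, optimized) while fio runs against /dev/nvme0n1, checking /sys/block/nvme0c*n1/ana_state after each change, as the subsequent lines show.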
00:10:17.358 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:17.358 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:17.358 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:17.616 [2024-11-19 09:38:05.039058] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:10:17.616 [2024-11-19 09:38:05.039147] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:17.616 [2024-11-19 09:38:05.188538] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:17.874 [2024-11-19 09:38:05.249452] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:17.874 [2024-11-19 09:38:05.249505] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:17.874 [2024-11-19 09:38:05.249516] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:17.874 [2024-11-19 09:38:05.249525] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:17.874 [2024-11-19 09:38:05.249532] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:17.874 [2024-11-19 09:38:05.250702] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:17.874 [2024-11-19 09:38:05.250838] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:17.874 [2024-11-19 09:38:05.250956] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:17.874 [2024-11-19 09:38:05.250956] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:17.874 [2024-11-19 09:38:05.305336] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:17.874 09:38:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:17.874 09:38:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@868 -- # return 0 00:10:17.874 09:38:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:17.874 09:38:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:17.874 09:38:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:17.874 09:38:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:17.874 09:38:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:18.132 [2024-11-19 09:38:05.700786] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:18.132 09:38:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:10:18.390 Malloc0 00:10:18.390 09:38:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@62 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:10:18.647 09:38:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:18.905 09:38:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:19.162 [2024-11-19 09:38:06.746792] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:19.162 09:38:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 00:10:19.419 [2024-11-19 09:38:07.002949] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 4420 *** 00:10:19.419 09:38:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c --hostid=9203ba0c-8506-4f0b-a886-a7f874c4694c -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:10:19.678 09:38:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c --hostid=9203ba0c-8506-4f0b-a886-a7f874c4694c -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.4 -s 4420 -g -G 00:10:19.678 09:38:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:10:19.678 09:38:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1202 -- # local i=0 00:10:19.678 09:38:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:19.678 09:38:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:19.678 09:38:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1209 -- # sleep 2 00:10:22.209 09:38:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:22.209 09:38:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:22.209 09:38:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:22.209 09:38:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:22.209 09:38:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:22.209 09:38:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1212 -- # return 0 00:10:22.209 09:38:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:10:22.209 09:38:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:10:22.209 09:38:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in 
/sys/class/nvme-subsystem/* 00:10:22.209 09:38:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:10:22.209 09:38:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:10:22.209 09:38:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:10:22.209 09:38:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:10:22.209 09:38:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:10:22.209 09:38:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:10:22.209 09:38:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:10:22.209 09:38:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:10:22.209 09:38:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:10:22.209 09:38:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:10:22.209 09:38:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:10:22.209 09:38:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:10:22.209 09:38:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:22.209 09:38:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:22.209 09:38:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:10:22.209 09:38:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:10:22.209 09:38:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:10:22.209 09:38:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:10:22.209 09:38:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:22.210 09:38:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:22.210 09:38:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:10:22.210 09:38:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:10:22.210 09:38:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:10:22.210 09:38:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=64911 00:10:22.210 09:38:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:10:22.210 09:38:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:10:22.210 [global] 00:10:22.210 thread=1 00:10:22.210 invalidate=1 00:10:22.210 rw=randrw 00:10:22.210 time_based=1 00:10:22.210 runtime=6 00:10:22.210 ioengine=libaio 00:10:22.210 direct=1 00:10:22.210 bs=4096 00:10:22.210 iodepth=128 00:10:22.210 norandommap=0 00:10:22.210 numjobs=1 00:10:22.210 00:10:22.210 verify_dump=1 00:10:22.210 verify_backlog=512 00:10:22.210 verify_state_save=0 00:10:22.210 do_verify=1 00:10:22.210 verify=crc32c-intel 00:10:22.210 [job0] 00:10:22.210 filename=/dev/nvme0n1 00:10:22.210 Could not set queue depth (nvme0n1) 00:10:22.210 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:22.210 fio-3.35 00:10:22.210 Starting 1 thread 00:10:22.776 09:38:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:10:23.035 09:38:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:10:23.602 09:38:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:10:23.602 09:38:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:10:23.602 09:38:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:23.603 09:38:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:23.603 09:38:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:10:23.603 09:38:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:10:23.603 09:38:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:10:23.603 09:38:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:10:23.603 09:38:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:23.603 09:38:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:23.603 09:38:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:10:23.603 09:38:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:10:23.603 09:38:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:10:23.861 09:38:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:10:24.120 09:38:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:10:24.120 09:38:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:10:24.120 09:38:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:24.120 09:38:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:24.120 09:38:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:10:24.120 09:38:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:10:24.120 09:38:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:10:24.120 09:38:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:10:24.120 09:38:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:24.120 09:38:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:24.120 09:38:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:10:24.120 09:38:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:10:24.120 09:38:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 64911 00:10:28.324 00:10:28.324 job0: (groupid=0, jobs=1): err= 0: pid=64933: Tue Nov 19 09:38:15 2024 00:10:28.324 read: IOPS=10.3k, BW=40.3MiB/s (42.3MB/s)(242MiB/6003msec) 00:10:28.324 slat (usec): min=4, max=7144, avg=56.10, stdev=220.45 00:10:28.324 clat (usec): min=1766, max=15432, avg=8413.50, stdev=1377.25 00:10:28.324 lat (usec): min=1775, max=15459, avg=8469.60, stdev=1380.20 00:10:28.324 clat percentiles (usec): 00:10:28.324 | 1.00th=[ 4621], 5.00th=[ 6652], 10.00th=[ 7308], 20.00th=[ 7701], 00:10:28.324 | 30.00th=[ 7963], 40.00th=[ 8160], 50.00th=[ 8291], 60.00th=[ 8455], 00:10:28.324 | 70.00th=[ 8586], 80.00th=[ 8979], 90.00th=[ 9503], 95.00th=[11863], 00:10:28.324 | 99.00th=[13042], 99.50th=[13304], 99.90th=[13960], 99.95th=[14222], 00:10:28.324 | 99.99th=[14615] 00:10:28.324 bw ( KiB/s): min= 6872, max=26552, per=51.64%, avg=21331.64, stdev=6992.79, samples=11 00:10:28.324 iops : min= 1718, max= 6638, avg=5332.91, stdev=1748.20, samples=11 00:10:28.324 write: IOPS=6220, BW=24.3MiB/s (25.5MB/s)(128MiB/5268msec); 0 zone resets 00:10:28.324 slat (usec): min=13, max=1907, avg=66.32, stdev=158.50 00:10:28.324 clat (usec): min=2694, max=13925, avg=7325.68, stdev=1209.41 00:10:28.324 lat (usec): min=2719, max=13943, avg=7392.00, stdev=1213.35 00:10:28.324 clat percentiles (usec): 00:10:28.324 | 1.00th=[ 3458], 5.00th=[ 4555], 10.00th=[ 5997], 20.00th=[ 6849], 00:10:28.324 | 30.00th=[ 7111], 40.00th=[ 7308], 50.00th=[ 7504], 60.00th=[ 7701], 00:10:28.324 | 70.00th=[ 7832], 80.00th=[ 8029], 90.00th=[ 8291], 95.00th=[ 8586], 00:10:28.324 | 99.00th=[11207], 99.50th=[11731], 99.90th=[12911], 99.95th=[13173], 00:10:28.324 | 99.99th=[13698] 00:10:28.324 bw ( KiB/s): min= 7288, max=26008, per=86.12%, avg=21427.73, stdev=6765.89, samples=11 00:10:28.324 iops : min= 1822, max= 6502, avg=5356.91, stdev=1691.47, samples=11 00:10:28.324 lat (msec) : 2=0.01%, 4=1.19%, 10=93.57%, 20=5.23% 00:10:28.324 cpu : usr=6.11%, sys=21.74%, ctx=5522, majf=0, minf=90 00:10:28.324 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:10:28.324 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:28.324 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:28.324 issued rwts: total=61995,32768,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:28.324 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:28.324 00:10:28.324 Run status group 0 (all jobs): 00:10:28.324 READ: bw=40.3MiB/s (42.3MB/s), 40.3MiB/s-40.3MiB/s (42.3MB/s-42.3MB/s), io=242MiB (254MB), run=6003-6003msec 00:10:28.324 WRITE: bw=24.3MiB/s (25.5MB/s), 24.3MiB/s-24.3MiB/s (25.5MB/s-25.5MB/s), io=128MiB (134MB), run=5268-5268msec 00:10:28.324 00:10:28.324 Disk stats (read/write): 00:10:28.324 nvme0n1: ios=61355/31978, merge=0/0, ticks=495508/219681, in_queue=715189, util=98.72% 00:10:28.324 09:38:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:10:28.324 09:38:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n optimized 00:10:28.891 09:38:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:10:28.891 09:38:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:10:28.891 09:38:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:28.891 09:38:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:28.891 09:38:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:10:28.891 09:38:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:10:28.891 09:38:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:10:28.891 09:38:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:10:28.891 09:38:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:28.891 09:38:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:28.891 09:38:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:10:28.891 09:38:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:10:28.891 09:38:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:10:28.891 09:38:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:10:28.891 09:38:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=65014 00:10:28.891 09:38:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:10:28.891 [global] 00:10:28.891 thread=1 00:10:28.891 invalidate=1 00:10:28.891 rw=randrw 00:10:28.891 time_based=1 00:10:28.891 runtime=6 00:10:28.891 ioengine=libaio 00:10:28.891 direct=1 00:10:28.891 bs=4096 00:10:28.891 iodepth=128 00:10:28.891 norandommap=0 00:10:28.891 numjobs=1 00:10:28.891 00:10:28.891 verify_dump=1 00:10:28.891 verify_backlog=512 00:10:28.891 verify_state_save=0 00:10:28.891 do_verify=1 00:10:28.891 verify=crc32c-intel 00:10:28.891 [job0] 00:10:28.891 filename=/dev/nvme0n1 00:10:28.891 Could not set queue depth (nvme0n1) 00:10:28.891 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:28.891 fio-3.35 00:10:28.891 Starting 1 thread 00:10:29.823 09:38:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:10:30.081 09:38:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:10:30.339 
09:38:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:10:30.339 09:38:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:10:30.339 09:38:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:30.339 09:38:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:30.339 09:38:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:10:30.339 09:38:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:10:30.339 09:38:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:10:30.339 09:38:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:10:30.339 09:38:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:30.339 09:38:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:30.339 09:38:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:10:30.339 09:38:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:10:30.339 09:38:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:10:30.597 09:38:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:10:30.855 09:38:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:10:30.855 09:38:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:10:30.855 09:38:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:30.855 09:38:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:30.855 09:38:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:10:30.855 09:38:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:10:30.855 09:38:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:10:30.855 09:38:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:10:30.855 09:38:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:30.855 09:38:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:30.855 09:38:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:10:30.855 09:38:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:10:30.855 09:38:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 65014 00:10:35.039 00:10:35.039 job0: (groupid=0, jobs=1): err= 0: pid=65035: Tue Nov 19 09:38:22 2024 00:10:35.039 read: IOPS=11.4k, BW=44.4MiB/s (46.5MB/s)(266MiB/6005msec) 00:10:35.039 slat (usec): min=3, max=6769, avg=42.74, stdev=193.68 00:10:35.039 clat (usec): min=286, max=17357, avg=7686.38, stdev=2128.75 00:10:35.039 lat (usec): min=307, max=17399, avg=7729.12, stdev=2143.57 00:10:35.039 clat percentiles (usec): 00:10:35.039 | 1.00th=[ 2442], 5.00th=[ 3982], 10.00th=[ 4686], 20.00th=[ 5800], 00:10:35.039 | 30.00th=[ 6980], 40.00th=[ 7701], 50.00th=[ 8094], 60.00th=[ 8356], 00:10:35.039 | 70.00th=[ 8586], 80.00th=[ 8979], 90.00th=[ 9634], 95.00th=[10945], 00:10:35.039 | 99.00th=[13435], 99.50th=[13829], 99.90th=[15533], 99.95th=[16450], 00:10:35.039 | 99.99th=[16909] 00:10:35.039 bw ( KiB/s): min=14400, max=38828, per=53.10%, avg=24131.27, stdev=7820.56, samples=11 00:10:35.039 iops : min= 3600, max= 9707, avg=6032.82, stdev=1955.14, samples=11 00:10:35.039 write: IOPS=6800, BW=26.6MiB/s (27.9MB/s)(143MiB/5391msec); 0 zone resets 00:10:35.039 slat (usec): min=5, max=2555, avg=54.96, stdev=136.93 00:10:35.039 clat (usec): min=433, max=17733, avg=6497.66, stdev=1923.50 00:10:35.039 lat (usec): min=452, max=17758, avg=6552.62, stdev=1937.13 00:10:35.039 clat percentiles (usec): 00:10:35.039 | 1.00th=[ 2442], 5.00th=[ 3326], 10.00th=[ 3752], 20.00th=[ 4424], 00:10:35.039 | 30.00th=[ 5145], 40.00th=[ 6390], 50.00th=[ 7177], 60.00th=[ 7504], 00:10:35.039 | 70.00th=[ 7767], 80.00th=[ 8094], 90.00th=[ 8455], 95.00th=[ 8717], 00:10:35.039 | 99.00th=[11076], 99.50th=[11994], 99.90th=[13698], 99.95th=[14222], 00:10:35.039 | 99.99th=[15533] 00:10:35.039 bw ( KiB/s): min=14888, max=38135, per=88.80%, avg=24153.36, stdev=7627.54, samples=11 00:10:35.039 iops : min= 3722, max= 9533, avg=6038.27, stdev=1906.75, samples=11 00:10:35.039 lat (usec) : 500=0.01%, 750=0.03%, 1000=0.05% 00:10:35.039 lat (msec) : 2=0.42%, 4=7.39%, 10=87.05%, 20=5.04% 00:10:35.039 cpu : usr=6.49%, sys=23.98%, ctx=5895, majf=0, minf=90 00:10:35.039 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:10:35.039 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:35.039 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:35.039 issued rwts: total=68218,36659,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:35.039 latency : target=0, 
window=0, percentile=100.00%, depth=128 00:10:35.039 00:10:35.039 Run status group 0 (all jobs): 00:10:35.039 READ: bw=44.4MiB/s (46.5MB/s), 44.4MiB/s-44.4MiB/s (46.5MB/s-46.5MB/s), io=266MiB (279MB), run=6005-6005msec 00:10:35.039 WRITE: bw=26.6MiB/s (27.9MB/s), 26.6MiB/s-26.6MiB/s (27.9MB/s-27.9MB/s), io=143MiB (150MB), run=5391-5391msec 00:10:35.039 00:10:35.039 Disk stats (read/write): 00:10:35.039 nvme0n1: ios=67306/36044, merge=0/0, ticks=491491/216944, in_queue=708435, util=98.60% 00:10:35.039 09:38:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:35.039 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:35.039 09:38:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:35.039 09:38:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1223 -- # local i=0 00:10:35.039 09:38:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:35.039 09:38:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:35.039 09:38:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:35.039 09:38:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:35.039 09:38:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1235 -- # return 0 00:10:35.039 09:38:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:35.297 09:38:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:10:35.297 09:38:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:10:35.297 09:38:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:10:35.297 09:38:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:10:35.297 09:38:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:35.297 09:38:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:10:35.555 09:38:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:35.555 09:38:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:10:35.555 09:38:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:35.555 09:38:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:35.555 rmmod nvme_tcp 00:10:35.555 rmmod nvme_fabrics 00:10:35.555 rmmod nvme_keyring 00:10:35.555 09:38:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:35.555 09:38:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:10:35.555 09:38:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:10:35.555 09:38:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n 
64825 ']' 00:10:35.555 09:38:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@518 -- # killprocess 64825 00:10:35.555 09:38:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@954 -- # '[' -z 64825 ']' 00:10:35.555 09:38:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@958 -- # kill -0 64825 00:10:35.555 09:38:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@959 -- # uname 00:10:35.555 09:38:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:35.555 09:38:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64825 00:10:35.555 killing process with pid 64825 00:10:35.555 09:38:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:35.555 09:38:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:35.555 09:38:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64825' 00:10:35.555 09:38:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@973 -- # kill 64825 00:10:35.555 09:38:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@978 -- # wait 64825 00:10:35.813 09:38:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:35.813 09:38:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:35.813 09:38:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:35.813 09:38:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:10:35.813 09:38:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:10:35.813 09:38:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:35.813 09:38:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:10:35.813 09:38:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:35.813 09:38:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:35.813 09:38:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:35.814 09:38:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:35.814 09:38:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:35.814 09:38:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:35.814 09:38:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:35.814 09:38:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:35.814 09:38:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:35.814 09:38:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:35.814 09:38:23 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:35.814 09:38:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:10:35.814 09:38:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:35.814 09:38:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:36.073 09:38:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:36.073 09:38:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:10:36.073 09:38:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:36.073 09:38:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:36.073 09:38:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:36.073 09:38:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@300 -- # return 0 00:10:36.073 00:10:36.073 real 0m19.168s 00:10:36.073 user 1m11.007s 00:10:36.073 sys 0m9.698s 00:10:36.073 09:38:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:36.073 09:38:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:36.073 ************************************ 00:10:36.073 END TEST nvmf_target_multipath 00:10:36.073 ************************************ 00:10:36.073 09:38:23 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:36.073 09:38:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:36.073 09:38:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:36.073 09:38:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:36.073 ************************************ 00:10:36.073 START TEST nvmf_zcopy 00:10:36.073 ************************************ 00:10:36.073 09:38:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:36.073 * Looking for test storage... 
00:10:36.073 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:36.073 09:38:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:36.073 09:38:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:10:36.073 09:38:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:36.332 09:38:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:36.332 09:38:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:36.332 09:38:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:36.332 09:38:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:36.332 09:38:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:10:36.332 09:38:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:10:36.332 09:38:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:10:36.332 09:38:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:10:36.332 09:38:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:10:36.332 09:38:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:10:36.332 09:38:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:10:36.332 09:38:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:36.332 09:38:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:10:36.332 09:38:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:10:36.332 09:38:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:36.333 09:38:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:36.333 09:38:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:10:36.333 09:38:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:10:36.333 09:38:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:36.333 09:38:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:10:36.333 09:38:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:10:36.333 09:38:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:10:36.333 09:38:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:10:36.333 09:38:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:36.333 09:38:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:10:36.333 09:38:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:10:36.333 09:38:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:36.333 09:38:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:36.333 09:38:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:10:36.333 09:38:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:36.333 09:38:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:36.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:36.333 --rc genhtml_branch_coverage=1 00:10:36.333 --rc genhtml_function_coverage=1 00:10:36.333 --rc genhtml_legend=1 00:10:36.333 --rc geninfo_all_blocks=1 00:10:36.333 --rc geninfo_unexecuted_blocks=1 00:10:36.333 00:10:36.333 ' 00:10:36.333 09:38:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:36.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:36.333 --rc genhtml_branch_coverage=1 00:10:36.333 --rc genhtml_function_coverage=1 00:10:36.333 --rc genhtml_legend=1 00:10:36.333 --rc geninfo_all_blocks=1 00:10:36.333 --rc geninfo_unexecuted_blocks=1 00:10:36.333 00:10:36.333 ' 00:10:36.333 09:38:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:36.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:36.333 --rc genhtml_branch_coverage=1 00:10:36.333 --rc genhtml_function_coverage=1 00:10:36.333 --rc genhtml_legend=1 00:10:36.333 --rc geninfo_all_blocks=1 00:10:36.333 --rc geninfo_unexecuted_blocks=1 00:10:36.333 00:10:36.333 ' 00:10:36.333 09:38:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:36.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:36.333 --rc genhtml_branch_coverage=1 00:10:36.333 --rc genhtml_function_coverage=1 00:10:36.333 --rc genhtml_legend=1 00:10:36.333 --rc geninfo_all_blocks=1 00:10:36.333 --rc geninfo_unexecuted_blocks=1 00:10:36.333 00:10:36.333 ' 00:10:36.333 09:38:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:36.333 09:38:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:10:36.333 09:38:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
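The scripts/common.sh trace above is the lcov capability check: `lt 1.15 2` splits both version strings on ".-:", walks the fields left to right, and decides on the first field that differs (1 < 2 here, so the installed lcov 1.15 predates 2.x and the pre-2.0 option names seen in the LCOV_OPTS exports, lcov_branch_coverage/lcov_function_coverage, are used). A standalone sketch of that comparison, with the helper bodies reconstructed from the traced steps and simplified to purely numeric fields, would be:

#!/usr/bin/env bash
# Simplified reconstruction of the traced cmp_versions/lt helpers: split each
# version on ".-:" and compare field by field; missing fields count as 0.
cmp_versions() {
    local -a ver1 ver2
    local op=$2 v len
    IFS='.-:' read -ra ver1 <<< "$1"
    IFS='.-:' read -ra ver2 <<< "$3"
    len=$((${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}))
    for ((v = 0; v < len; v++)); do
        if ((${ver1[v]:-0} > ${ver2[v]:-0})); then
            [[ $op == ">" || $op == ">=" ]]
            return
        elif ((${ver1[v]:-0} < ${ver2[v]:-0})); then
            [[ $op == "<" || $op == "<=" ]]
            return
        fi
    done
    [[ $op == "==" || $op == "<=" || $op == ">=" ]]
}
lt() { cmp_versions "$1" "<" "$2"; }
lt 1.15 2 && echo "lcov 1.15 is older than 2"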
00:10:36.333 09:38:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:36.333 09:38:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:36.333 09:38:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:36.333 09:38:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:36.333 09:38:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:36.333 09:38:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:36.333 09:38:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:36.333 09:38:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:36.333 09:38:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:36.333 09:38:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c 00:10:36.333 09:38:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=9203ba0c-8506-4f0b-a886-a7f874c4694c 00:10:36.333 09:38:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:36.333 09:38:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:36.333 09:38:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:36.333 09:38:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:36.333 09:38:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:36.333 09:38:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:10:36.333 09:38:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:36.333 09:38:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:36.333 09:38:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:36.333 09:38:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:36.333 09:38:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:36.333 09:38:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:36.333 09:38:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:10:36.333 09:38:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:36.333 09:38:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:10:36.333 09:38:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:36.333 09:38:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:36.333 09:38:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:36.333 09:38:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:36.333 09:38:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:36.333 09:38:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:36.333 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:36.333 09:38:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:36.333 09:38:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:36.333 09:38:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:36.333 09:38:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:10:36.333 09:38:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:36.333 09:38:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
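Among the nvmf/common.sh setup traced above, the initiator identity is derived once per run: `nvme gen-hostnqn` emits a uuid-based host NQN, the uuid portion is reused as the host ID, and both are stored in the NVME_HOST array as --hostnqn/--hostid flags for the nvme connect based tests (the zcopy test below drives I/O through bdevperf instead). A minimal sketch of that flow, with the connect target shown purely for illustration:

# Sketch: derive the host identity the same way nvmf/common.sh does and pass it
# to nvme connect. The NQN/uuid values match the ones printed in the trace above;
# the subsystem/address below mirror this test suite's usual listener and are
# illustrative, not taken from this exact point in the log.
NVME_HOSTNQN=$(nvme gen-hostnqn)     # e.g. nqn.2014-08.org.nvmexpress:uuid:9203ba0c-...
NVME_HOSTID=${NVME_HOSTNQN##*:}      # keep only the uuid
NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
nvme connect -t tcp -a 10.0.0.3 -s 4420 -n nqn.2016-06.io.spdk:cnode1 "${NVME_HOST[@]}"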
00:10:36.333 09:38:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:36.333 09:38:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:36.333 09:38:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:36.333 09:38:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:36.333 09:38:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:36.333 09:38:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:36.333 09:38:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:10:36.333 09:38:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:10:36.333 09:38:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:10:36.333 09:38:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:10:36.333 09:38:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:10:36.333 09:38:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@460 -- # nvmf_veth_init 00:10:36.333 09:38:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:36.333 09:38:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:36.333 09:38:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:36.333 09:38:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:36.333 09:38:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:36.333 09:38:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:36.334 09:38:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:36.334 09:38:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:36.334 09:38:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:36.334 09:38:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:36.334 09:38:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:36.334 09:38:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:36.334 09:38:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:36.334 09:38:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:36.334 09:38:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:36.334 09:38:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:36.334 09:38:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:36.334 Cannot find device "nvmf_init_br" 00:10:36.334 09:38:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:10:36.334 09:38:23 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:36.334 Cannot find device "nvmf_init_br2" 00:10:36.334 09:38:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:10:36.334 09:38:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:36.334 Cannot find device "nvmf_tgt_br" 00:10:36.334 09:38:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # true 00:10:36.334 09:38:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:36.334 Cannot find device "nvmf_tgt_br2" 00:10:36.334 09:38:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # true 00:10:36.334 09:38:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:36.334 Cannot find device "nvmf_init_br" 00:10:36.334 09:38:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # true 00:10:36.334 09:38:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:36.334 Cannot find device "nvmf_init_br2" 00:10:36.334 09:38:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # true 00:10:36.334 09:38:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:36.334 Cannot find device "nvmf_tgt_br" 00:10:36.334 09:38:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # true 00:10:36.334 09:38:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:36.334 Cannot find device "nvmf_tgt_br2" 00:10:36.334 09:38:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # true 00:10:36.334 09:38:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:36.334 Cannot find device "nvmf_br" 00:10:36.334 09:38:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # true 00:10:36.334 09:38:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:36.334 Cannot find device "nvmf_init_if" 00:10:36.334 09:38:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # true 00:10:36.334 09:38:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:36.334 Cannot find device "nvmf_init_if2" 00:10:36.334 09:38:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # true 00:10:36.334 09:38:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:36.334 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:36.334 09:38:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # true 00:10:36.334 09:38:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:36.334 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:36.334 09:38:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # true 00:10:36.334 09:38:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:36.592 09:38:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:36.592 09:38:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type 
veth peer name nvmf_init_br2 00:10:36.592 09:38:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:36.592 09:38:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:36.592 09:38:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:36.592 09:38:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:36.592 09:38:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:36.592 09:38:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:36.592 09:38:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:36.592 09:38:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:36.592 09:38:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:36.592 09:38:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:36.592 09:38:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:36.592 09:38:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:36.592 09:38:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:36.592 09:38:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:36.592 09:38:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:36.592 09:38:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:36.592 09:38:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:36.592 09:38:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:36.592 09:38:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:36.592 09:38:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:36.592 09:38:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:10:36.592 09:38:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:36.592 09:38:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:36.592 09:38:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:36.592 09:38:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:36.592 09:38:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:36.592 09:38:24 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:36.592 09:38:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:36.592 09:38:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:36.850 09:38:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:36.850 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:36.850 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.149 ms 00:10:36.850 00:10:36.850 --- 10.0.0.3 ping statistics --- 00:10:36.850 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:36.850 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:10:36.850 09:38:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:36.850 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:36.850 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.061 ms 00:10:36.850 00:10:36.850 --- 10.0.0.4 ping statistics --- 00:10:36.850 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:36.850 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:10:36.850 09:38:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:36.850 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:36.850 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:10:36.850 00:10:36.850 --- 10.0.0.1 ping statistics --- 00:10:36.850 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:36.850 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:10:36.850 09:38:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:36.850 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:36.850 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.074 ms 00:10:36.850 00:10:36.850 --- 10.0.0.2 ping statistics --- 00:10:36.850 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:36.850 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:10:36.850 09:38:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:36.850 09:38:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@461 -- # return 0 00:10:36.850 09:38:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:36.850 09:38:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:36.850 09:38:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:36.850 09:38:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:36.850 09:38:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:36.850 09:38:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:36.850 09:38:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:36.850 09:38:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:10:36.850 09:38:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:36.850 09:38:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:36.850 09:38:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:36.850 09:38:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=65341 00:10:36.850 09:38:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:36.850 09:38:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 65341 00:10:36.850 09:38:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 65341 ']' 00:10:36.850 09:38:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:36.850 09:38:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:36.850 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:36.850 09:38:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:36.850 09:38:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:36.850 09:38:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:36.850 [2024-11-19 09:38:24.321969] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 
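With the veth/bridge topology built and verified by the pings above, nvmfappstart starts the target inside the nvmf_tgt_ns_spdk namespace and waitforlisten polls until the RPC socket responds. A simplified reconstruction of those two helpers (the real ones in nvmf/common.sh and autotest_common.sh handle shared-memory IDs, timeouts and cleanup traps; rpc_get_methods is just a cheap RPC used here to probe readiness):

# Sketch of nvmfappstart -m 0x2 / waitforlisten: run nvmf_tgt in the target
# namespace, then poll the default RPC socket until it answers.
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
nvmfpid=$!
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
    kill -0 "$nvmfpid" 2> /dev/null || { echo "nvmf_tgt exited during startup" >&2; exit 1; }
    sleep 0.5
done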
00:10:36.850 [2024-11-19 09:38:24.322088] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:37.107 [2024-11-19 09:38:24.474662] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:37.107 [2024-11-19 09:38:24.542989] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:37.107 [2024-11-19 09:38:24.543077] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:37.107 [2024-11-19 09:38:24.543092] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:37.107 [2024-11-19 09:38:24.543102] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:37.107 [2024-11-19 09:38:24.543111] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:37.107 [2024-11-19 09:38:24.543592] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:37.107 [2024-11-19 09:38:24.602833] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:37.107 09:38:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:37.107 09:38:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:10:37.107 09:38:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:37.107 09:38:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:37.107 09:38:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:37.107 09:38:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:37.107 09:38:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:10:37.107 09:38:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:10:37.107 09:38:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.107 09:38:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:37.107 [2024-11-19 09:38:24.721466] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:37.107 09:38:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.107 09:38:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:37.107 09:38:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.107 09:38:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:37.366 09:38:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.366 09:38:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:37.366 09:38:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.366 09:38:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@10 -- # set +x 00:10:37.366 [2024-11-19 09:38:24.738005] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:37.366 09:38:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.366 09:38:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:10:37.366 09:38:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.366 09:38:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:37.366 09:38:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.366 09:38:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:10:37.366 09:38:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.366 09:38:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:37.366 malloc0 00:10:37.366 09:38:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.366 09:38:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:10:37.366 09:38:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.366 09:38:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:37.366 09:38:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.366 09:38:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:10:37.366 09:38:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:10:37.366 09:38:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:10:37.366 09:38:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:10:37.366 09:38:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:37.366 09:38:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:37.366 { 00:10:37.366 "params": { 00:10:37.366 "name": "Nvme$subsystem", 00:10:37.366 "trtype": "$TEST_TRANSPORT", 00:10:37.366 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:37.366 "adrfam": "ipv4", 00:10:37.366 "trsvcid": "$NVMF_PORT", 00:10:37.366 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:37.366 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:37.366 "hdgst": ${hdgst:-false}, 00:10:37.366 "ddgst": ${ddgst:-false} 00:10:37.366 }, 00:10:37.366 "method": "bdev_nvme_attach_controller" 00:10:37.366 } 00:10:37.366 EOF 00:10:37.366 )") 00:10:37.366 09:38:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:10:37.366 09:38:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
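The bdevperf run above receives its initiator configuration from gen_nvmf_target_json through a /dev/fd process substitution rather than a file on disk. Written out by hand, the call looks roughly like the following: the bdev_nvme_attach_controller parameters are exactly the ones the trace prints next, while the surrounding "subsystems"/"bdev" wrapper is the usual SPDK JSON-config shape and is assumed here, since the heredoc that adds it is not echoed by the trace.

# Sketch: hand an inline NVMe-oF initiator config to bdevperf via process
# substitution (verify workload, 8 KiB I/O, queue depth 128, 10 seconds).
config='{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.3",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}'
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json <(printf '%s\n' "$config") -t 10 -q 128 -w verify -o 8192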
00:10:37.366 09:38:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:10:37.366 09:38:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:37.366 "params": { 00:10:37.366 "name": "Nvme1", 00:10:37.366 "trtype": "tcp", 00:10:37.366 "traddr": "10.0.0.3", 00:10:37.366 "adrfam": "ipv4", 00:10:37.366 "trsvcid": "4420", 00:10:37.366 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:37.366 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:37.366 "hdgst": false, 00:10:37.366 "ddgst": false 00:10:37.366 }, 00:10:37.366 "method": "bdev_nvme_attach_controller" 00:10:37.366 }' 00:10:37.366 [2024-11-19 09:38:24.823292] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:10:37.366 [2024-11-19 09:38:24.823377] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65366 ] 00:10:37.366 [2024-11-19 09:38:24.967375] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:37.684 [2024-11-19 09:38:25.027776] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:37.684 [2024-11-19 09:38:25.090403] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:37.684 Running I/O for 10 seconds... 00:10:40.022 5687.00 IOPS, 44.43 MiB/s [2024-11-19T09:38:28.213Z] 5701.50 IOPS, 44.54 MiB/s [2024-11-19T09:38:29.586Z] 5724.33 IOPS, 44.72 MiB/s [2024-11-19T09:38:30.517Z] 5754.75 IOPS, 44.96 MiB/s [2024-11-19T09:38:31.452Z] 5772.80 IOPS, 45.10 MiB/s [2024-11-19T09:38:32.389Z] 5788.50 IOPS, 45.22 MiB/s [2024-11-19T09:38:33.325Z] 5764.57 IOPS, 45.04 MiB/s [2024-11-19T09:38:34.260Z] 5762.88 IOPS, 45.02 MiB/s [2024-11-19T09:38:35.635Z] 5777.67 IOPS, 45.14 MiB/s [2024-11-19T09:38:35.635Z] 5789.10 IOPS, 45.23 MiB/s 00:10:48.012 Latency(us) 00:10:48.012 [2024-11-19T09:38:35.635Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:48.012 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:10:48.012 Verification LBA range: start 0x0 length 0x1000 00:10:48.012 Nvme1n1 : 10.01 5792.07 45.25 0.00 0.00 22031.38 310.92 31933.91 00:10:48.012 [2024-11-19T09:38:35.635Z] =================================================================================================================== 00:10:48.012 [2024-11-19T09:38:35.635Z] Total : 5792.07 45.25 0.00 0.00 22031.38 310.92 31933.91 00:10:48.012 09:38:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=65489 00:10:48.012 09:38:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:10:48.012 09:38:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:10:48.012 09:38:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:10:48.012 09:38:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:10:48.012 09:38:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:48.012 09:38:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:10:48.012 09:38:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:48.012 09:38:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:48.012 { 00:10:48.012 "params": { 00:10:48.012 "name": "Nvme$subsystem", 00:10:48.012 "trtype": "$TEST_TRANSPORT", 00:10:48.012 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:48.012 "adrfam": "ipv4", 00:10:48.012 "trsvcid": "$NVMF_PORT", 00:10:48.012 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:48.012 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:48.012 "hdgst": ${hdgst:-false}, 00:10:48.012 "ddgst": ${ddgst:-false} 00:10:48.012 }, 00:10:48.012 "method": "bdev_nvme_attach_controller" 00:10:48.012 } 00:10:48.012 EOF 00:10:48.012 )") 00:10:48.012 09:38:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:10:48.012 [2024-11-19 09:38:35.420503] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.012 [2024-11-19 09:38:35.420551] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.012 09:38:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:10:48.012 09:38:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:10:48.012 09:38:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:48.012 "params": { 00:10:48.012 "name": "Nvme1", 00:10:48.012 "trtype": "tcp", 00:10:48.012 "traddr": "10.0.0.3", 00:10:48.012 "adrfam": "ipv4", 00:10:48.012 "trsvcid": "4420", 00:10:48.012 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:48.012 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:48.012 "hdgst": false, 00:10:48.012 "ddgst": false 00:10:48.012 }, 00:10:48.012 "method": "bdev_nvme_attach_controller" 00:10:48.012 }' 00:10:48.012 [2024-11-19 09:38:35.432454] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.012 [2024-11-19 09:38:35.432488] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.012 [2024-11-19 09:38:35.444452] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.012 [2024-11-19 09:38:35.444483] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.012 [2024-11-19 09:38:35.456452] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.012 [2024-11-19 09:38:35.456482] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.012 [2024-11-19 09:38:35.468458] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.012 [2024-11-19 09:38:35.468489] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.013 [2024-11-19 09:38:35.471654] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 
00:10:48.013 [2024-11-19 09:38:35.471744] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65489 ] 00:10:48.013 [2024-11-19 09:38:35.480463] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.013 [2024-11-19 09:38:35.480493] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.013 [2024-11-19 09:38:35.492476] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.013 [2024-11-19 09:38:35.492506] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.013 [2024-11-19 09:38:35.504473] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.013 [2024-11-19 09:38:35.504504] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.013 [2024-11-19 09:38:35.516473] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.013 [2024-11-19 09:38:35.516503] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.013 [2024-11-19 09:38:35.528498] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.013 [2024-11-19 09:38:35.528530] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.013 [2024-11-19 09:38:35.540478] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.013 [2024-11-19 09:38:35.540509] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.013 [2024-11-19 09:38:35.552480] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.013 [2024-11-19 09:38:35.552510] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.013 [2024-11-19 09:38:35.564487] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.013 [2024-11-19 09:38:35.564520] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.013 [2024-11-19 09:38:35.576490] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.013 [2024-11-19 09:38:35.576520] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.013 [2024-11-19 09:38:35.588492] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.013 [2024-11-19 09:38:35.588521] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.013 [2024-11-19 09:38:35.600497] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.013 [2024-11-19 09:38:35.600526] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.013 [2024-11-19 09:38:35.612497] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.013 [2024-11-19 09:38:35.612525] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.013 [2024-11-19 09:38:35.619404] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:48.013 [2024-11-19 09:38:35.624505] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.013 [2024-11-19 09:38:35.624535] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: 
Unable to add namespace 00:10:48.271 [2024-11-19 09:38:35.636529] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.271 [2024-11-19 09:38:35.636567] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.271 [2024-11-19 09:38:35.648513] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.271 [2024-11-19 09:38:35.648544] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.271 [2024-11-19 09:38:35.660516] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.271 [2024-11-19 09:38:35.660547] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.271 [2024-11-19 09:38:35.672525] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.271 [2024-11-19 09:38:35.672559] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.271 [2024-11-19 09:38:35.678161] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:48.271 [2024-11-19 09:38:35.684537] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.271 [2024-11-19 09:38:35.684570] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.271 [2024-11-19 09:38:35.696553] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.271 [2024-11-19 09:38:35.696592] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.271 [2024-11-19 09:38:35.708556] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.271 [2024-11-19 09:38:35.708596] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.271 [2024-11-19 09:38:35.720558] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.271 [2024-11-19 09:38:35.720597] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.271 [2024-11-19 09:38:35.732559] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.271 [2024-11-19 09:38:35.732601] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.271 [2024-11-19 09:38:35.739095] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:48.271 [2024-11-19 09:38:35.744564] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.271 [2024-11-19 09:38:35.744597] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.271 [2024-11-19 09:38:35.756571] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.271 [2024-11-19 09:38:35.756615] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.271 [2024-11-19 09:38:35.768579] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.271 [2024-11-19 09:38:35.768620] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.271 [2024-11-19 09:38:35.780571] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.271 [2024-11-19 09:38:35.780611] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.271 [2024-11-19 09:38:35.792599] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:10:48.271 [2024-11-19 09:38:35.792655] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.271 [2024-11-19 09:38:35.804600] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.271 [2024-11-19 09:38:35.804640] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.271 [2024-11-19 09:38:35.816615] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.271 [2024-11-19 09:38:35.816658] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.272 [2024-11-19 09:38:35.828613] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.272 [2024-11-19 09:38:35.828654] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.272 [2024-11-19 09:38:35.840612] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.272 [2024-11-19 09:38:35.840654] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.272 [2024-11-19 09:38:35.852621] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.272 [2024-11-19 09:38:35.852658] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.272 Running I/O for 5 seconds... 00:10:48.272 [2024-11-19 09:38:35.868977] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.272 [2024-11-19 09:38:35.869016] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.272 [2024-11-19 09:38:35.878346] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.272 [2024-11-19 09:38:35.878382] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.272 [2024-11-19 09:38:35.894272] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.272 [2024-11-19 09:38:35.894306] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.530 [2024-11-19 09:38:35.911303] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.530 [2024-11-19 09:38:35.911340] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.530 [2024-11-19 09:38:35.927588] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.530 [2024-11-19 09:38:35.927624] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.530 [2024-11-19 09:38:35.943762] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.530 [2024-11-19 09:38:35.943801] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.530 [2024-11-19 09:38:35.960799] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.530 [2024-11-19 09:38:35.960837] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.530 [2024-11-19 09:38:35.975543] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.530 [2024-11-19 09:38:35.975582] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.530 [2024-11-19 09:38:35.992579] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.530 [2024-11-19 09:38:35.992618] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
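(Editor's note, not part of the captured log: the bdevperf figures in this run are internally consistent. The MiB/s column is the IOPS column multiplied by the 8192-byte IO size passed via -o 8192 and divided by 2^20. A minimal stand-alone check against the 10-second summary above, where 5792.07 IOPS is reported as 45.25 MiB/s; the variable names below are illustrative and do not come from the test scripts:

    # Sketch: recompute bdevperf's MiB/s column from its IOPS column.
    # 8192 is the IO size used in this run (-o 8192); MiB is taken as 2^20 bytes.
    iops=5792.07
    io_size=8192
    awk -v iops="$iops" -v sz="$io_size" \
        'BEGIN { printf "%.2f MiB/s\n", iops * sz / (1024 * 1024) }'
    # Prints "45.25 MiB/s", matching the Nvme1n1 row in the 10-second summary above.

The same relation holds for the per-second samples of the 5-second run that follows, e.g. 11521.00 IOPS corresponds to the reported 90.01 MiB/s.)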
00:10:48.530 [2024-11-19 09:38:36.009475] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.530 [2024-11-19 09:38:36.009516] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.530 [2024-11-19 09:38:36.026030] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.530 [2024-11-19 09:38:36.026068] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.530 [2024-11-19 09:38:36.042330] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.530 [2024-11-19 09:38:36.042367] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.530 [2024-11-19 09:38:36.059582] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.530 [2024-11-19 09:38:36.059621] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.530 [2024-11-19 09:38:36.076261] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.530 [2024-11-19 09:38:36.076306] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.530 [2024-11-19 09:38:36.093050] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.530 [2024-11-19 09:38:36.093087] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.530 [2024-11-19 09:38:36.110014] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.530 [2024-11-19 09:38:36.110052] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.530 [2024-11-19 09:38:36.126296] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.530 [2024-11-19 09:38:36.126335] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.530 [2024-11-19 09:38:36.142568] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.530 [2024-11-19 09:38:36.142606] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.530 [2024-11-19 09:38:36.151968] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.530 [2024-11-19 09:38:36.152006] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.789 [2024-11-19 09:38:36.168121] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.789 [2024-11-19 09:38:36.168160] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.789 [2024-11-19 09:38:36.184696] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.789 [2024-11-19 09:38:36.184733] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.789 [2024-11-19 09:38:36.201299] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.789 [2024-11-19 09:38:36.201333] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.789 [2024-11-19 09:38:36.219392] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.789 [2024-11-19 09:38:36.219427] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.789 [2024-11-19 09:38:36.234039] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.789 
[2024-11-19 09:38:36.234077] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.789 [2024-11-19 09:38:36.249266] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.789 [2024-11-19 09:38:36.249305] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.789 [2024-11-19 09:38:36.259096] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.789 [2024-11-19 09:38:36.259134] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.789 [2024-11-19 09:38:36.275786] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.789 [2024-11-19 09:38:36.275842] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.789 [2024-11-19 09:38:36.290774] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.790 [2024-11-19 09:38:36.290822] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.790 [2024-11-19 09:38:36.300855] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.790 [2024-11-19 09:38:36.300903] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.790 [2024-11-19 09:38:36.316810] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.790 [2024-11-19 09:38:36.316872] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.790 [2024-11-19 09:38:36.331733] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.790 [2024-11-19 09:38:36.331790] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.790 [2024-11-19 09:38:36.347297] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.790 [2024-11-19 09:38:36.347343] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.790 [2024-11-19 09:38:36.365096] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.790 [2024-11-19 09:38:36.365137] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.790 [2024-11-19 09:38:36.379472] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.790 [2024-11-19 09:38:36.379530] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.790 [2024-11-19 09:38:36.397363] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.790 [2024-11-19 09:38:36.397414] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.790 [2024-11-19 09:38:36.412425] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.790 [2024-11-19 09:38:36.412475] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.052 [2024-11-19 09:38:36.422314] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.052 [2024-11-19 09:38:36.422354] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.052 [2024-11-19 09:38:36.437973] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.052 [2024-11-19 09:38:36.438012] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.052 [2024-11-19 09:38:36.455384] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.052 [2024-11-19 09:38:36.455426] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.052 [2024-11-19 09:38:36.470102] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.052 [2024-11-19 09:38:36.470159] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.052 [2024-11-19 09:38:36.485412] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.052 [2024-11-19 09:38:36.485468] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.052 [2024-11-19 09:38:36.502952] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.052 [2024-11-19 09:38:36.503009] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.052 [2024-11-19 09:38:36.518143] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.052 [2024-11-19 09:38:36.518182] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.052 [2024-11-19 09:38:36.527626] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.052 [2024-11-19 09:38:36.527669] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.052 [2024-11-19 09:38:36.543453] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.052 [2024-11-19 09:38:36.543511] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.052 [2024-11-19 09:38:36.560896] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.053 [2024-11-19 09:38:36.560952] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.053 [2024-11-19 09:38:36.575318] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.053 [2024-11-19 09:38:36.575369] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.053 [2024-11-19 09:38:36.593072] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.053 [2024-11-19 09:38:36.593120] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.053 [2024-11-19 09:38:36.607821] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.053 [2024-11-19 09:38:36.607876] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.053 [2024-11-19 09:38:36.623461] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.053 [2024-11-19 09:38:36.623521] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.053 [2024-11-19 09:38:36.632898] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.053 [2024-11-19 09:38:36.632947] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.053 [2024-11-19 09:38:36.648454] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.053 [2024-11-19 09:38:36.648496] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.053 [2024-11-19 09:38:36.664159] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.053 [2024-11-19 09:38:36.664222] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.315 [2024-11-19 09:38:36.680759] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.315 [2024-11-19 09:38:36.680815] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.315 [2024-11-19 09:38:36.698872] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.315 [2024-11-19 09:38:36.698933] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.315 [2024-11-19 09:38:36.714479] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.315 [2024-11-19 09:38:36.714534] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.315 [2024-11-19 09:38:36.733118] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.315 [2024-11-19 09:38:36.733165] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.315 [2024-11-19 09:38:36.748041] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.315 [2024-11-19 09:38:36.748100] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.315 [2024-11-19 09:38:36.758120] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.315 [2024-11-19 09:38:36.758173] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.315 [2024-11-19 09:38:36.773914] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.315 [2024-11-19 09:38:36.773971] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.315 [2024-11-19 09:38:36.788974] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.315 [2024-11-19 09:38:36.789021] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.315 [2024-11-19 09:38:36.799030] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.315 [2024-11-19 09:38:36.799081] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.315 [2024-11-19 09:38:36.814633] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.315 [2024-11-19 09:38:36.814689] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.315 [2024-11-19 09:38:36.830898] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.316 [2024-11-19 09:38:36.830956] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.316 [2024-11-19 09:38:36.847849] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.316 [2024-11-19 09:38:36.847906] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.316 11521.00 IOPS, 90.01 MiB/s [2024-11-19T09:38:36.939Z] [2024-11-19 09:38:36.863980] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.316 [2024-11-19 09:38:36.864032] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.316 [2024-11-19 09:38:36.881529] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.316 [2024-11-19 09:38:36.881588] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.316 [2024-11-19 
09:38:36.897105] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.316 [2024-11-19 09:38:36.897163] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.316 [2024-11-19 09:38:36.906909] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.316 [2024-11-19 09:38:36.906959] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.316 [2024-11-19 09:38:36.922820] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.316 [2024-11-19 09:38:36.922865] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.575 [2024-11-19 09:38:36.940918] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.575 [2024-11-19 09:38:36.940977] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.575 [2024-11-19 09:38:36.955649] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.575 [2024-11-19 09:38:36.955710] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.575 [2024-11-19 09:38:36.970970] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.575 [2024-11-19 09:38:36.971026] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.575 [2024-11-19 09:38:36.980777] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.575 [2024-11-19 09:38:36.980816] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.575 [2024-11-19 09:38:36.996531] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.575 [2024-11-19 09:38:36.996589] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.575 [2024-11-19 09:38:37.012714] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.575 [2024-11-19 09:38:37.012770] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.575 [2024-11-19 09:38:37.022088] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.575 [2024-11-19 09:38:37.022139] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.575 [2024-11-19 09:38:37.038433] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.575 [2024-11-19 09:38:37.038488] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.575 [2024-11-19 09:38:37.055023] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.575 [2024-11-19 09:38:37.055081] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.575 [2024-11-19 09:38:37.072651] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.575 [2024-11-19 09:38:37.072711] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.575 [2024-11-19 09:38:37.088436] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.575 [2024-11-19 09:38:37.088497] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.575 [2024-11-19 09:38:37.106129] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.575 [2024-11-19 09:38:37.106178] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.575 [2024-11-19 09:38:37.120542] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.575 [2024-11-19 09:38:37.120608] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.575 [2024-11-19 09:38:37.136144] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.575 [2024-11-19 09:38:37.136196] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.575 [2024-11-19 09:38:37.154271] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.575 [2024-11-19 09:38:37.154309] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.575 [2024-11-19 09:38:37.169239] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.575 [2024-11-19 09:38:37.169276] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.575 [2024-11-19 09:38:37.179248] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.575 [2024-11-19 09:38:37.179288] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.575 [2024-11-19 09:38:37.195253] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.575 [2024-11-19 09:38:37.195294] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.834 [2024-11-19 09:38:37.210247] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.834 [2024-11-19 09:38:37.210284] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.834 [2024-11-19 09:38:37.227921] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.834 [2024-11-19 09:38:37.227962] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.834 [2024-11-19 09:38:37.242722] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.834 [2024-11-19 09:38:37.242761] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.834 [2024-11-19 09:38:37.251724] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.834 [2024-11-19 09:38:37.251764] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.834 [2024-11-19 09:38:37.268544] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.834 [2024-11-19 09:38:37.268582] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.834 [2024-11-19 09:38:37.285295] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.834 [2024-11-19 09:38:37.285335] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.834 [2024-11-19 09:38:37.302010] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.834 [2024-11-19 09:38:37.302048] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.834 [2024-11-19 09:38:37.319865] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.834 [2024-11-19 09:38:37.319912] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.834 [2024-11-19 09:38:37.334389] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.834 [2024-11-19 09:38:37.334428] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.834 [2024-11-19 09:38:37.350957] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.834 [2024-11-19 09:38:37.351008] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.834 [2024-11-19 09:38:37.366251] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.834 [2024-11-19 09:38:37.366297] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.834 [2024-11-19 09:38:37.376527] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.834 [2024-11-19 09:38:37.376577] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.834 [2024-11-19 09:38:37.391783] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.834 [2024-11-19 09:38:37.391833] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.834 [2024-11-19 09:38:37.408223] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.834 [2024-11-19 09:38:37.408288] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.834 [2024-11-19 09:38:37.424897] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.834 [2024-11-19 09:38:37.424955] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.834 [2024-11-19 09:38:37.441334] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.834 [2024-11-19 09:38:37.441389] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.128 [2024-11-19 09:38:37.458082] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.128 [2024-11-19 09:38:37.458141] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.128 [2024-11-19 09:38:37.474925] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.128 [2024-11-19 09:38:37.474998] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.128 [2024-11-19 09:38:37.489734] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.128 [2024-11-19 09:38:37.489799] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.128 [2024-11-19 09:38:37.505991] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.128 [2024-11-19 09:38:37.506061] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.128 [2024-11-19 09:38:37.522851] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.128 [2024-11-19 09:38:37.522919] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.128 [2024-11-19 09:38:37.539772] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.128 [2024-11-19 09:38:37.539836] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.128 [2024-11-19 09:38:37.556392] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.128 [2024-11-19 09:38:37.556497] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.128 [2024-11-19 09:38:37.572215] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.128 [2024-11-19 09:38:37.572275] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.128 [2024-11-19 09:38:37.582069] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.128 [2024-11-19 09:38:37.582107] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.128 [2024-11-19 09:38:37.597541] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.128 [2024-11-19 09:38:37.597590] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.128 [2024-11-19 09:38:37.608048] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.128 [2024-11-19 09:38:37.608087] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.128 [2024-11-19 09:38:37.623372] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.128 [2024-11-19 09:38:37.623415] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.128 [2024-11-19 09:38:37.639672] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.128 [2024-11-19 09:38:37.639713] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.128 [2024-11-19 09:38:37.656214] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.128 [2024-11-19 09:38:37.656275] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.128 [2024-11-19 09:38:37.672855] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.128 [2024-11-19 09:38:37.672905] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.128 [2024-11-19 09:38:37.690394] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.128 [2024-11-19 09:38:37.690431] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.128 [2024-11-19 09:38:37.706138] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.128 [2024-11-19 09:38:37.706233] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.128 [2024-11-19 09:38:37.722609] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.128 [2024-11-19 09:38:37.722647] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.128 [2024-11-19 09:38:37.739006] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.128 [2024-11-19 09:38:37.739059] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.387 [2024-11-19 09:38:37.755348] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.387 [2024-11-19 09:38:37.755409] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.387 [2024-11-19 09:38:37.771232] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.387 [2024-11-19 09:38:37.771284] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.387 [2024-11-19 09:38:37.789343] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.387 [2024-11-19 09:38:37.789383] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.387 [2024-11-19 09:38:37.805044] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.387 [2024-11-19 09:38:37.805081] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.387 [2024-11-19 09:38:37.822845] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.387 [2024-11-19 09:38:37.822882] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.387 [2024-11-19 09:38:37.837858] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.387 [2024-11-19 09:38:37.837910] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.387 [2024-11-19 09:38:37.854374] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.387 [2024-11-19 09:38:37.854411] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.387 11448.00 IOPS, 89.44 MiB/s [2024-11-19T09:38:38.010Z] [2024-11-19 09:38:37.870938] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.387 [2024-11-19 09:38:37.870978] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.387 [2024-11-19 09:38:37.886471] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.387 [2024-11-19 09:38:37.886512] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.387 [2024-11-19 09:38:37.905355] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.387 [2024-11-19 09:38:37.905398] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.387 [2024-11-19 09:38:37.920346] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.387 [2024-11-19 09:38:37.920390] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.387 [2024-11-19 09:38:37.935396] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.387 [2024-11-19 09:38:37.935443] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.387 [2024-11-19 09:38:37.950737] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.387 [2024-11-19 09:38:37.950793] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.387 [2024-11-19 09:38:37.966808] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.387 [2024-11-19 09:38:37.966866] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.387 [2024-11-19 09:38:37.985350] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.387 [2024-11-19 09:38:37.985389] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.387 [2024-11-19 09:38:38.000204] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.387 [2024-11-19 09:38:38.000254] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.645 [2024-11-19 09:38:38.015986] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:10:50.645 [2024-11-19 09:38:38.016024] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.645 [2024-11-19 09:38:38.033640] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.645 [2024-11-19 09:38:38.033876] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.645 [2024-11-19 09:38:38.049561] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.645 [2024-11-19 09:38:38.049598] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.645 [2024-11-19 09:38:38.066451] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.645 [2024-11-19 09:38:38.066487] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.645 [2024-11-19 09:38:38.083404] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.645 [2024-11-19 09:38:38.083440] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.645 [2024-11-19 09:38:38.099978] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.645 [2024-11-19 09:38:38.100018] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.645 [2024-11-19 09:38:38.115859] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.645 [2024-11-19 09:38:38.115899] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.645 [2024-11-19 09:38:38.125720] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.645 [2024-11-19 09:38:38.125758] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.645 [2024-11-19 09:38:38.142444] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.645 [2024-11-19 09:38:38.142483] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.645 [2024-11-19 09:38:38.158793] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.645 [2024-11-19 09:38:38.158833] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.645 [2024-11-19 09:38:38.176835] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.645 [2024-11-19 09:38:38.177013] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.645 [2024-11-19 09:38:38.191883] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.645 [2024-11-19 09:38:38.192081] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.645 [2024-11-19 09:38:38.201532] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.645 [2024-11-19 09:38:38.201572] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.645 [2024-11-19 09:38:38.217835] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.645 [2024-11-19 09:38:38.217875] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.645 [2024-11-19 09:38:38.234243] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.645 [2024-11-19 09:38:38.234286] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.645 [2024-11-19 09:38:38.244307] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.645 [2024-11-19 09:38:38.244345] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.645 [2024-11-19 09:38:38.255930] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.645 [2024-11-19 09:38:38.255982] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.904 [2024-11-19 09:38:38.271323] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.904 [2024-11-19 09:38:38.271362] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.904 [2024-11-19 09:38:38.287801] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.904 [2024-11-19 09:38:38.287840] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.904 [2024-11-19 09:38:38.305555] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.904 [2024-11-19 09:38:38.305730] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.904 [2024-11-19 09:38:38.320836] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.904 [2024-11-19 09:38:38.321003] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.904 [2024-11-19 09:38:38.330518] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.904 [2024-11-19 09:38:38.330558] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.904 [2024-11-19 09:38:38.345901] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.904 [2024-11-19 09:38:38.345942] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.904 [2024-11-19 09:38:38.364045] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.904 [2024-11-19 09:38:38.364229] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.904 [2024-11-19 09:38:38.379219] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.904 [2024-11-19 09:38:38.379256] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.904 [2024-11-19 09:38:38.389132] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.904 [2024-11-19 09:38:38.389171] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.904 [2024-11-19 09:38:38.404601] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.904 [2024-11-19 09:38:38.404644] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.904 [2024-11-19 09:38:38.421374] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.904 [2024-11-19 09:38:38.421416] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.904 [2024-11-19 09:38:38.438711] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.904 [2024-11-19 09:38:38.438884] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.904 [2024-11-19 09:38:38.454285] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.904 [2024-11-19 09:38:38.454329] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.904 [2024-11-19 09:38:38.464676] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.904 [2024-11-19 09:38:38.464716] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.904 [2024-11-19 09:38:38.479686] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.904 [2024-11-19 09:38:38.479724] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.904 [2024-11-19 09:38:38.496029] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.904 [2024-11-19 09:38:38.496068] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.904 [2024-11-19 09:38:38.513861] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.904 [2024-11-19 09:38:38.514056] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.162 [2024-11-19 09:38:38.528932] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.162 [2024-11-19 09:38:38.529112] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.162 [2024-11-19 09:38:38.544892] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.162 [2024-11-19 09:38:38.544932] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.163 [2024-11-19 09:38:38.555163] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.163 [2024-11-19 09:38:38.555200] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.163 [2024-11-19 09:38:38.570309] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.163 [2024-11-19 09:38:38.570345] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.163 [2024-11-19 09:38:38.587359] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.163 [2024-11-19 09:38:38.587394] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.163 [2024-11-19 09:38:38.603503] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.163 [2024-11-19 09:38:38.603539] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.163 [2024-11-19 09:38:38.621634] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.163 [2024-11-19 09:38:38.621800] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.163 [2024-11-19 09:38:38.637264] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.163 [2024-11-19 09:38:38.637325] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.163 [2024-11-19 09:38:38.653779] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.163 [2024-11-19 09:38:38.653825] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.163 [2024-11-19 09:38:38.670099] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.163 [2024-11-19 09:38:38.670144] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.163 [2024-11-19 09:38:38.687253] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.163 [2024-11-19 09:38:38.687334] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.163 [2024-11-19 09:38:38.705517] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.163 [2024-11-19 09:38:38.705574] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.163 [2024-11-19 09:38:38.720694] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.163 [2024-11-19 09:38:38.720980] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.163 [2024-11-19 09:38:38.736541] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.163 [2024-11-19 09:38:38.736599] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.163 [2024-11-19 09:38:38.754094] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.163 [2024-11-19 09:38:38.754151] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.163 [2024-11-19 09:38:38.769403] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.163 [2024-11-19 09:38:38.769463] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.163 [2024-11-19 09:38:38.779253] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.163 [2024-11-19 09:38:38.779308] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.421 [2024-11-19 09:38:38.795152] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.421 [2024-11-19 09:38:38.795378] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.421 [2024-11-19 09:38:38.811808] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.421 [2024-11-19 09:38:38.811851] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.421 [2024-11-19 09:38:38.829112] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.421 [2024-11-19 09:38:38.829162] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.421 [2024-11-19 09:38:38.845290] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.421 [2024-11-19 09:38:38.845332] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.421 11459.00 IOPS, 89.52 MiB/s [2024-11-19T09:38:39.044Z] [2024-11-19 09:38:38.861616] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.421 [2024-11-19 09:38:38.861658] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.421 [2024-11-19 09:38:38.878769] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.421 [2024-11-19 09:38:38.878934] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.421 [2024-11-19 09:38:38.894482] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.421 [2024-11-19 09:38:38.894521] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.421 [2024-11-19 09:38:38.911579] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:10:51.421 [2024-11-19 09:38:38.911619] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.421 [2024-11-19 09:38:38.929316] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.421 [2024-11-19 09:38:38.929354] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.421 [2024-11-19 09:38:38.944442] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.421 [2024-11-19 09:38:38.944621] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.421 [2024-11-19 09:38:38.962429] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.421 [2024-11-19 09:38:38.962468] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.421 [2024-11-19 09:38:38.977602] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.421 [2024-11-19 09:38:38.977641] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.421 [2024-11-19 09:38:38.994866] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.421 [2024-11-19 09:38:38.994904] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.421 [2024-11-19 09:38:39.011126] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.421 [2024-11-19 09:38:39.011165] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.421 [2024-11-19 09:38:39.028117] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.421 [2024-11-19 09:38:39.028325] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.421 [2024-11-19 09:38:39.043707] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.680 [2024-11-19 09:38:39.043865] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.680 [2024-11-19 09:38:39.061700] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.680 [2024-11-19 09:38:39.061741] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.680 [2024-11-19 09:38:39.076769] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.680 [2024-11-19 09:38:39.076808] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.680 [2024-11-19 09:38:39.089038] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.680 [2024-11-19 09:38:39.089077] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.680 [2024-11-19 09:38:39.106475] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.680 [2024-11-19 09:38:39.106739] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.680 [2024-11-19 09:38:39.121486] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.680 [2024-11-19 09:38:39.121667] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.680 [2024-11-19 09:38:39.137804] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.680 [2024-11-19 09:38:39.137868] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.680 [2024-11-19 09:38:39.153688] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.680 [2024-11-19 09:38:39.153752] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.680 [2024-11-19 09:38:39.163175] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.680 [2024-11-19 09:38:39.163253] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.680 [2024-11-19 09:38:39.175536] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.680 [2024-11-19 09:38:39.175583] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.680 [2024-11-19 09:38:39.190487] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.680 [2024-11-19 09:38:39.190526] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.680 [2024-11-19 09:38:39.205588] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.680 [2024-11-19 09:38:39.205629] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.680 [2024-11-19 09:38:39.215542] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.680 [2024-11-19 09:38:39.215588] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.680 [2024-11-19 09:38:39.230110] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.680 [2024-11-19 09:38:39.230155] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.680 [2024-11-19 09:38:39.245107] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.680 [2024-11-19 09:38:39.245349] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.680 [2024-11-19 09:38:39.261312] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.680 [2024-11-19 09:38:39.261351] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.680 [2024-11-19 09:38:39.278443] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.680 [2024-11-19 09:38:39.278485] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.680 [2024-11-19 09:38:39.294799] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.680 [2024-11-19 09:38:39.294838] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.938 [2024-11-19 09:38:39.313089] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.938 [2024-11-19 09:38:39.313265] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.938 [2024-11-19 09:38:39.328150] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.939 [2024-11-19 09:38:39.328330] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.939 [2024-11-19 09:38:39.337866] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.939 [2024-11-19 09:38:39.337904] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.939 [2024-11-19 09:38:39.349992] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.939 [2024-11-19 09:38:39.350029] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.939 [2024-11-19 09:38:39.365877] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.939 [2024-11-19 09:38:39.365915] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.939 [2024-11-19 09:38:39.384907] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.939 [2024-11-19 09:38:39.384945] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.939 [2024-11-19 09:38:39.399781] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.939 [2024-11-19 09:38:39.399951] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.939 [2024-11-19 09:38:39.417669] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.939 [2024-11-19 09:38:39.417710] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.939 [2024-11-19 09:38:39.432510] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.939 [2024-11-19 09:38:39.432552] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.939 [2024-11-19 09:38:39.447999] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.939 [2024-11-19 09:38:39.448039] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.939 [2024-11-19 09:38:39.467324] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.939 [2024-11-19 09:38:39.467363] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.939 [2024-11-19 09:38:39.482122] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.939 [2024-11-19 09:38:39.482161] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.939 [2024-11-19 09:38:39.491238] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.939 [2024-11-19 09:38:39.491275] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.939 [2024-11-19 09:38:39.507336] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.939 [2024-11-19 09:38:39.507373] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.939 [2024-11-19 09:38:39.516947] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.939 [2024-11-19 09:38:39.516987] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.939 [2024-11-19 09:38:39.533328] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.939 [2024-11-19 09:38:39.533366] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.939 [2024-11-19 09:38:39.550318] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.939 [2024-11-19 09:38:39.550355] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.197 [2024-11-19 09:38:39.566847] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.197 [2024-11-19 09:38:39.566907] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.197 [2024-11-19 09:38:39.585351] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.197 [2024-11-19 09:38:39.585414] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.197 [2024-11-19 09:38:39.600363] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.197 [2024-11-19 09:38:39.600644] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.197 [2024-11-19 09:38:39.610531] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.197 [2024-11-19 09:38:39.610586] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.197 [2024-11-19 09:38:39.626440] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.197 [2024-11-19 09:38:39.626493] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.197 [2024-11-19 09:38:39.644382] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.197 [2024-11-19 09:38:39.644448] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.197 [2024-11-19 09:38:39.659681] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.197 [2024-11-19 09:38:39.659972] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.197 [2024-11-19 09:38:39.670053] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.197 [2024-11-19 09:38:39.670110] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.197 [2024-11-19 09:38:39.685564] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.197 [2024-11-19 09:38:39.685621] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.197 [2024-11-19 09:38:39.702223] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.197 [2024-11-19 09:38:39.702264] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.197 [2024-11-19 09:38:39.718502] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.197 [2024-11-19 09:38:39.718771] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.197 [2024-11-19 09:38:39.735057] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.197 [2024-11-19 09:38:39.735119] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.197 [2024-11-19 09:38:39.753036] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.197 [2024-11-19 09:38:39.753095] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.197 [2024-11-19 09:38:39.767953] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.197 [2024-11-19 09:38:39.768009] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.197 [2024-11-19 09:38:39.784351] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.197 [2024-11-19 09:38:39.784413] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.197 [2024-11-19 09:38:39.801122] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.197 [2024-11-19 09:38:39.801189] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.197 [2024-11-19 09:38:39.817597] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.197 [2024-11-19 09:38:39.817869] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.456 [2024-11-19 09:38:39.834125] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.456 [2024-11-19 09:38:39.834165] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.456 [2024-11-19 09:38:39.851608] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.456 [2024-11-19 09:38:39.851670] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.456 11485.50 IOPS, 89.73 MiB/s [2024-11-19T09:38:40.079Z] [2024-11-19 09:38:39.866415] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.456 [2024-11-19 09:38:39.866472] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.456 [2024-11-19 09:38:39.881516] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.456 [2024-11-19 09:38:39.881577] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.456 [2024-11-19 09:38:39.897381] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.456 [2024-11-19 09:38:39.897427] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.456 [2024-11-19 09:38:39.906866] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.456 [2024-11-19 09:38:39.906913] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.456 [2024-11-19 09:38:39.923006] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.456 [2024-11-19 09:38:39.923067] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.456 [2024-11-19 09:38:39.940789] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.456 [2024-11-19 09:38:39.941030] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.456 [2024-11-19 09:38:39.956795] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.456 [2024-11-19 09:38:39.956844] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.456 [2024-11-19 09:38:39.974322] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.456 [2024-11-19 09:38:39.974382] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.456 [2024-11-19 09:38:39.989174] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.456 [2024-11-19 09:38:39.989252] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.456 [2024-11-19 09:38:40.007149] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.456 [2024-11-19 09:38:40.007226] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.456 [2024-11-19 09:38:40.022140] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.456 [2024-11-19 09:38:40.022183] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.456 [2024-11-19 
09:38:40.032158] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.456 [2024-11-19 09:38:40.032204] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.456 [2024-11-19 09:38:40.047525] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.456 [2024-11-19 09:38:40.047760] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.456 [2024-11-19 09:38:40.063832] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.456 [2024-11-19 09:38:40.063894] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.715 [2024-11-19 09:38:40.081882] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.715 [2024-11-19 09:38:40.081937] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.715 [2024-11-19 09:38:40.096925] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.715 [2024-11-19 09:38:40.096969] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.715 [2024-11-19 09:38:40.107115] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.715 [2024-11-19 09:38:40.107305] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.715 [2024-11-19 09:38:40.122727] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.715 [2024-11-19 09:38:40.122885] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.715 [2024-11-19 09:38:40.138518] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.715 [2024-11-19 09:38:40.138672] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.715 [2024-11-19 09:38:40.154498] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.715 [2024-11-19 09:38:40.154649] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.715 [2024-11-19 09:38:40.172505] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.715 [2024-11-19 09:38:40.172665] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.715 [2024-11-19 09:38:40.187355] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.715 [2024-11-19 09:38:40.187512] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.715 [2024-11-19 09:38:40.203535] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.715 [2024-11-19 09:38:40.203684] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.715 [2024-11-19 09:38:40.221050] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.715 [2024-11-19 09:38:40.221199] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.715 [2024-11-19 09:38:40.235916] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.715 [2024-11-19 09:38:40.236065] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.715 [2024-11-19 09:38:40.252302] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.715 [2024-11-19 09:38:40.252452] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.715 [2024-11-19 09:38:40.270097] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.715 [2024-11-19 09:38:40.270265] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.715 [2024-11-19 09:38:40.285119] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.715 [2024-11-19 09:38:40.285281] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.715 [2024-11-19 09:38:40.294790] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.715 [2024-11-19 09:38:40.294938] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.715 [2024-11-19 09:38:40.310483] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.715 [2024-11-19 09:38:40.310630] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.715 [2024-11-19 09:38:40.329533] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.715 [2024-11-19 09:38:40.329682] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.974 [2024-11-19 09:38:40.344788] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.974 [2024-11-19 09:38:40.344980] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.974 [2024-11-19 09:38:40.363494] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.974 [2024-11-19 09:38:40.363715] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.974 [2024-11-19 09:38:40.378598] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.974 [2024-11-19 09:38:40.378752] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.974 [2024-11-19 09:38:40.394572] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.974 [2024-11-19 09:38:40.394727] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.974 [2024-11-19 09:38:40.412263] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.974 [2024-11-19 09:38:40.412413] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.974 [2024-11-19 09:38:40.427032] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.974 [2024-11-19 09:38:40.427183] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.974 [2024-11-19 09:38:40.442796] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.974 [2024-11-19 09:38:40.442945] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.974 [2024-11-19 09:38:40.460456] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.974 [2024-11-19 09:38:40.460608] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.974 [2024-11-19 09:38:40.476559] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.974 [2024-11-19 09:38:40.476709] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.974 [2024-11-19 09:38:40.492675] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.974 [2024-11-19 09:38:40.492715] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.974 [2024-11-19 09:38:40.510580] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.974 [2024-11-19 09:38:40.510618] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.974 [2024-11-19 09:38:40.525458] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.974 [2024-11-19 09:38:40.525618] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.974 [2024-11-19 09:38:40.540692] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.974 [2024-11-19 09:38:40.540842] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.974 [2024-11-19 09:38:40.550115] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.974 [2024-11-19 09:38:40.550153] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.974 [2024-11-19 09:38:40.566100] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.974 [2024-11-19 09:38:40.566142] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.974 [2024-11-19 09:38:40.582626] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.974 [2024-11-19 09:38:40.582666] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.232 [2024-11-19 09:38:40.598867] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.232 [2024-11-19 09:38:40.598908] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.232 [2024-11-19 09:38:40.618202] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.232 [2024-11-19 09:38:40.618254] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.232 [2024-11-19 09:38:40.633044] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.232 [2024-11-19 09:38:40.633096] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.232 [2024-11-19 09:38:40.643047] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.232 [2024-11-19 09:38:40.643091] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.232 [2024-11-19 09:38:40.658627] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.232 [2024-11-19 09:38:40.658674] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.232 [2024-11-19 09:38:40.675004] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.232 [2024-11-19 09:38:40.675046] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.232 [2024-11-19 09:38:40.691162] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.232 [2024-11-19 09:38:40.691201] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.232 [2024-11-19 09:38:40.708316] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.232 [2024-11-19 09:38:40.708355] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.232 [2024-11-19 09:38:40.724507] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.232 [2024-11-19 09:38:40.724547] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.232 [2024-11-19 09:38:40.742135] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.232 [2024-11-19 09:38:40.742176] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.232 [2024-11-19 09:38:40.757762] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.232 [2024-11-19 09:38:40.757800] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.232 [2024-11-19 09:38:40.767007] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.232 [2024-11-19 09:38:40.767045] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.232 [2024-11-19 09:38:40.783280] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.232 [2024-11-19 09:38:40.783318] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.232 [2024-11-19 09:38:40.792801] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.232 [2024-11-19 09:38:40.792840] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.232 [2024-11-19 09:38:40.808175] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.232 [2024-11-19 09:38:40.808448] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.232 [2024-11-19 09:38:40.824049] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.232 [2024-11-19 09:38:40.824299] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.232 [2024-11-19 09:38:40.840783] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.232 [2024-11-19 09:38:40.840860] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.492 [2024-11-19 09:38:40.857692] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.492 [2024-11-19 09:38:40.857752] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.492 11501.40 IOPS, 89.85 MiB/s 00:10:53.492 Latency(us) 00:10:53.492 [2024-11-19T09:38:41.115Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:53.492 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:10:53.492 Nvme1n1 : 5.01 11505.77 89.89 0.00 0.00 11113.00 3842.79 18230.92 00:10:53.492 [2024-11-19T09:38:41.115Z] =================================================================================================================== 00:10:53.492 [2024-11-19T09:38:41.115Z] Total : 11505.77 89.89 0.00 0.00 11113.00 3842.79 18230.92 00:10:53.492 [2024-11-19 09:38:40.869361] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.492 [2024-11-19 09:38:40.869404] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.492 [2024-11-19 09:38:40.881346] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.492 [2024-11-19 09:38:40.881385] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.492 [2024-11-19 09:38:40.893375] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.492 [2024-11-19 09:38:40.893437] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.492 [2024-11-19 09:38:40.905373] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.492 [2024-11-19 09:38:40.905424] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.492 [2024-11-19 09:38:40.917376] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.492 [2024-11-19 09:38:40.917427] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.492 [2024-11-19 09:38:40.929403] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.492 [2024-11-19 09:38:40.929456] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.492 [2024-11-19 09:38:40.941381] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.492 [2024-11-19 09:38:40.941432] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.492 [2024-11-19 09:38:40.953394] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.492 [2024-11-19 09:38:40.953443] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.492 [2024-11-19 09:38:40.965380] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.492 [2024-11-19 09:38:40.965429] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.492 [2024-11-19 09:38:40.977390] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.492 [2024-11-19 09:38:40.977441] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.492 [2024-11-19 09:38:40.989391] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.492 [2024-11-19 09:38:40.989441] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.492 [2024-11-19 09:38:41.001389] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.492 [2024-11-19 09:38:41.001435] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.492 [2024-11-19 09:38:41.013386] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.492 [2024-11-19 09:38:41.013429] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.492 [2024-11-19 09:38:41.025397] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.492 [2024-11-19 09:38:41.025446] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.492 [2024-11-19 09:38:41.037402] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.492 [2024-11-19 09:38:41.037451] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.492 [2024-11-19 09:38:41.049403] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.492 [2024-11-19 09:38:41.049452] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.492 [2024-11-19 09:38:41.061407] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.492 [2024-11-19 09:38:41.061451] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.492 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (65489) - No such process 00:10:53.492 09:38:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 65489 00:10:53.492 09:38:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:53.492 09:38:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.492 09:38:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:53.492 09:38:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.492 09:38:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:53.492 09:38:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.492 09:38:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:53.492 delay0 00:10:53.492 09:38:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.492 09:38:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:10:53.492 09:38:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.492 09:38:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:53.492 09:38:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.492 09:38:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 ns:1' 00:10:53.750 [2024-11-19 09:38:41.276260] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:11:00.308 Initializing NVMe Controllers 00:11:00.308 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:11:00.308 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:11:00.308 Initialization complete. Launching workers. 
00:11:00.308 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 81 00:11:00.308 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 368, failed to submit 33 00:11:00.308 success 238, unsuccessful 130, failed 0 00:11:00.308 09:38:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:11:00.308 09:38:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:11:00.308 09:38:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:00.308 09:38:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:11:00.308 09:38:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:00.308 09:38:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:11:00.308 09:38:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:00.308 09:38:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:00.308 rmmod nvme_tcp 00:11:00.308 rmmod nvme_fabrics 00:11:00.308 rmmod nvme_keyring 00:11:00.308 09:38:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:00.308 09:38:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:11:00.308 09:38:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:11:00.308 09:38:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 65341 ']' 00:11:00.308 09:38:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 65341 00:11:00.308 09:38:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 65341 ']' 00:11:00.308 09:38:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 65341 00:11:00.308 09:38:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:11:00.309 09:38:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:00.309 09:38:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65341 00:11:00.309 killing process with pid 65341 00:11:00.309 09:38:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:11:00.309 09:38:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:11:00.309 09:38:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65341' 00:11:00.309 09:38:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 65341 00:11:00.309 09:38:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 65341 00:11:00.309 09:38:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:00.309 09:38:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:00.309 09:38:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:00.309 09:38:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:11:00.309 09:38:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:11:00.309 09:38:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:11:00.309 09:38:47 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:00.309 09:38:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:00.309 09:38:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:11:00.309 09:38:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:11:00.309 09:38:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:11:00.309 09:38:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:11:00.309 09:38:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:11:00.309 09:38:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:11:00.309 09:38:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:11:00.309 09:38:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:11:00.309 09:38:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:11:00.309 09:38:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:11:00.309 09:38:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:11:00.309 09:38:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:11:00.309 09:38:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:00.309 09:38:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:00.309 09:38:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@246 -- # remove_spdk_ns 00:11:00.309 09:38:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:00.309 09:38:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:00.309 09:38:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:00.309 09:38:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@300 -- # return 0 00:11:00.309 ************************************ 00:11:00.309 END TEST nvmf_zcopy 00:11:00.309 ************************************ 00:11:00.309 00:11:00.309 real 0m24.345s 00:11:00.309 user 0m39.523s 00:11:00.309 sys 0m7.027s 00:11:00.309 09:38:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:00.309 09:38:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:00.568 09:38:47 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:11:00.568 09:38:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:00.568 09:38:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:00.568 09:38:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:00.568 ************************************ 00:11:00.568 START TEST nvmf_nmic 00:11:00.568 ************************************ 00:11:00.568 09:38:47 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:11:00.568 * Looking for test storage... 00:11:00.568 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:00.568 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:00.568 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:11:00.568 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:00.568 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:00.568 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:00.568 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:00.568 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:00.568 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:11:00.568 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:11:00.568 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:11:00.568 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:11:00.568 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:11:00.568 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:11:00.568 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:11:00.568 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:00.568 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:11:00.568 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:11:00.568 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:00.568 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:00.568 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:11:00.568 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:11:00.568 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:00.568 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:11:00.568 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:11:00.569 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:11:00.569 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:11:00.569 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:00.569 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:11:00.569 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:11:00.569 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:00.569 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:00.569 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:11:00.569 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:00.569 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:00.569 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:00.569 --rc genhtml_branch_coverage=1 00:11:00.569 --rc genhtml_function_coverage=1 00:11:00.569 --rc genhtml_legend=1 00:11:00.569 --rc geninfo_all_blocks=1 00:11:00.569 --rc geninfo_unexecuted_blocks=1 00:11:00.569 00:11:00.569 ' 00:11:00.569 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:00.569 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:00.569 --rc genhtml_branch_coverage=1 00:11:00.569 --rc genhtml_function_coverage=1 00:11:00.569 --rc genhtml_legend=1 00:11:00.569 --rc geninfo_all_blocks=1 00:11:00.569 --rc geninfo_unexecuted_blocks=1 00:11:00.569 00:11:00.569 ' 00:11:00.569 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:00.569 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:00.569 --rc genhtml_branch_coverage=1 00:11:00.569 --rc genhtml_function_coverage=1 00:11:00.569 --rc genhtml_legend=1 00:11:00.569 --rc geninfo_all_blocks=1 00:11:00.569 --rc geninfo_unexecuted_blocks=1 00:11:00.569 00:11:00.569 ' 00:11:00.569 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:00.569 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:00.569 --rc genhtml_branch_coverage=1 00:11:00.569 --rc genhtml_function_coverage=1 00:11:00.569 --rc genhtml_legend=1 00:11:00.569 --rc geninfo_all_blocks=1 00:11:00.569 --rc geninfo_unexecuted_blocks=1 00:11:00.569 00:11:00.569 ' 00:11:00.569 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:00.569 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:11:00.569 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:00.569 09:38:48 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:00.569 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:00.569 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:00.569 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:00.569 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:00.569 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:00.569 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:00.569 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:00.569 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:00.569 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c 00:11:00.569 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=9203ba0c-8506-4f0b-a886-a7f874c4694c 00:11:00.569 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:00.569 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:00.569 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:00.569 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:00.569 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:00.569 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:11:00.569 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:00.569 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:00.569 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:00.569 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.569 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.569 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.569 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:11:00.569 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.569 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:11:00.569 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:00.569 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:00.569 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:00.569 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:00.569 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:00.569 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:00.569 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:00.569 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:00.569 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:00.569 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:00.569 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:00.569 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:00.569 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:11:00.569 09:38:48 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:00.569 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:00.569 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:00.569 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:00.569 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:00.569 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:00.569 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:00.569 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:00.828 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:11:00.828 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:11:00.828 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:11:00.828 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:11:00.828 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:11:00.828 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@460 -- # nvmf_veth_init 00:11:00.828 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:00.828 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:11:00.828 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:11:00.828 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:11:00.828 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:00.828 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:11:00.828 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:00.828 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:11:00.828 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:00.828 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:11:00.828 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:00.828 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:00.828 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:00.828 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:00.828 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:00.828 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:00.828 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:11:00.828 Cannot 
find device "nvmf_init_br" 00:11:00.828 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:11:00.828 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:11:00.828 Cannot find device "nvmf_init_br2" 00:11:00.828 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:11:00.828 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:11:00.828 Cannot find device "nvmf_tgt_br" 00:11:00.828 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # true 00:11:00.828 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:11:00.828 Cannot find device "nvmf_tgt_br2" 00:11:00.828 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # true 00:11:00.828 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:11:00.828 Cannot find device "nvmf_init_br" 00:11:00.828 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # true 00:11:00.828 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:11:00.828 Cannot find device "nvmf_init_br2" 00:11:00.828 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # true 00:11:00.828 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:11:00.828 Cannot find device "nvmf_tgt_br" 00:11:00.828 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # true 00:11:00.828 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:11:00.828 Cannot find device "nvmf_tgt_br2" 00:11:00.828 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # true 00:11:00.828 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:11:00.828 Cannot find device "nvmf_br" 00:11:00.828 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # true 00:11:00.828 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:11:00.828 Cannot find device "nvmf_init_if" 00:11:00.828 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # true 00:11:00.829 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:11:00.829 Cannot find device "nvmf_init_if2" 00:11:00.829 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # true 00:11:00.829 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:00.829 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:00.829 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # true 00:11:00.829 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:00.829 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:00.829 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # true 00:11:00.829 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:11:00.829 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 
00:11:00.829 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:11:00.829 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:00.829 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:00.829 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:00.829 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:00.829 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:00.829 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:11:00.829 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:11:00.829 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:11:00.829 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:11:00.829 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:11:01.088 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:11:01.088 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:11:01.088 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:11:01.088 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:11:01.088 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:01.088 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:01.088 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:01.088 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:11:01.088 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:11:01.088 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:11:01.088 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:11:01.088 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:01.088 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:01.088 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:01.088 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:11:01.088 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@218 
-- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:11:01.088 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:11:01.088 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:01.088 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:11:01.088 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:11:01.088 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:01.088 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.099 ms 00:11:01.088 00:11:01.088 --- 10.0.0.3 ping statistics --- 00:11:01.088 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:01.088 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:11:01.088 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:11:01.088 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:11:01.088 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.065 ms 00:11:01.088 00:11:01.088 --- 10.0.0.4 ping statistics --- 00:11:01.088 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:01.088 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:11:01.088 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:01.088 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:01.088 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:11:01.088 00:11:01.088 --- 10.0.0.1 ping statistics --- 00:11:01.088 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:01.088 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:11:01.088 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:11:01.088 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:01.088 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:11:01.088 00:11:01.088 --- 10.0.0.2 ping statistics --- 00:11:01.088 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:01.088 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:11:01.088 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:01.088 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@461 -- # return 0 00:11:01.088 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:01.088 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:01.088 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:01.088 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:01.088 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:01.088 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:01.088 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:01.088 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:11:01.088 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:01.088 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:01.088 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:01.089 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:01.089 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=65869 00:11:01.089 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:01.089 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 65869 00:11:01.089 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 65869 ']' 00:11:01.089 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:01.089 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:01.089 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:01.089 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:01.089 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:01.089 [2024-11-19 09:38:48.697373] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 
00:11:01.089 [2024-11-19 09:38:48.697727] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:01.347 [2024-11-19 09:38:48.851404] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:01.348 [2024-11-19 09:38:48.924132] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:01.348 [2024-11-19 09:38:48.924416] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:01.348 [2024-11-19 09:38:48.924565] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:01.348 [2024-11-19 09:38:48.924583] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:01.348 [2024-11-19 09:38:48.924592] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:01.348 [2024-11-19 09:38:48.925847] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:01.348 [2024-11-19 09:38:48.925994] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:01.348 [2024-11-19 09:38:48.926121] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:01.348 [2024-11-19 09:38:48.926133] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:01.607 [2024-11-19 09:38:48.982933] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:01.607 09:38:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:01.607 09:38:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:11:01.607 09:38:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:01.607 09:38:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:01.607 09:38:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:01.607 09:38:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:01.607 09:38:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:01.607 09:38:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.607 09:38:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:01.607 [2024-11-19 09:38:49.099204] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:01.607 09:38:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.607 09:38:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:01.607 09:38:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.607 09:38:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:01.607 Malloc0 00:11:01.607 09:38:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.607 09:38:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:01.607 09:38:49 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.607 09:38:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:01.607 09:38:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.607 09:38:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:01.607 09:38:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.607 09:38:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:01.607 09:38:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.607 09:38:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:11:01.607 09:38:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.607 09:38:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:01.607 [2024-11-19 09:38:49.170705] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:01.607 test case1: single bdev can't be used in multiple subsystems 00:11:01.607 09:38:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.607 09:38:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:11:01.607 09:38:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:11:01.607 09:38:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.607 09:38:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:01.607 09:38:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.607 09:38:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:11:01.607 09:38:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.607 09:38:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:01.607 09:38:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.607 09:38:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:11:01.607 09:38:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:11:01.607 09:38:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.607 09:38:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:01.607 [2024-11-19 09:38:49.194514] bdev.c:8180:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:11:01.607 [2024-11-19 09:38:49.194561] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:11:01.607 [2024-11-19 09:38:49.194581] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.607 request: 00:11:01.607 { 00:11:01.607 
"nqn": "nqn.2016-06.io.spdk:cnode2", 00:11:01.607 "namespace": { 00:11:01.607 "bdev_name": "Malloc0", 00:11:01.607 "no_auto_visible": false 00:11:01.607 }, 00:11:01.607 "method": "nvmf_subsystem_add_ns", 00:11:01.607 "req_id": 1 00:11:01.607 } 00:11:01.607 Got JSON-RPC error response 00:11:01.607 response: 00:11:01.607 { 00:11:01.607 "code": -32602, 00:11:01.607 "message": "Invalid parameters" 00:11:01.607 } 00:11:01.607 Adding namespace failed - expected result. 00:11:01.607 test case2: host connect to nvmf target in multiple paths 00:11:01.607 09:38:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:11:01.607 09:38:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:11:01.607 09:38:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:11:01.607 09:38:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:11:01.607 09:38:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:11:01.607 09:38:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:11:01.607 09:38:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.607 09:38:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:01.607 [2024-11-19 09:38:49.206651] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:11:01.607 09:38:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.607 09:38:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c --hostid=9203ba0c-8506-4f0b-a886-a7f874c4694c -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:11:01.866 09:38:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c --hostid=9203ba0c-8506-4f0b-a886-a7f874c4694c -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4421 00:11:01.866 09:38:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:11:01.866 09:38:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:11:01.866 09:38:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:01.866 09:38:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:01.866 09:38:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:11:04.399 09:38:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:04.399 09:38:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:04.399 09:38:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:04.399 09:38:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:04.399 09:38:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:04.399 09:38:51 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:11:04.399 09:38:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:11:04.399 [global] 00:11:04.399 thread=1 00:11:04.399 invalidate=1 00:11:04.400 rw=write 00:11:04.400 time_based=1 00:11:04.400 runtime=1 00:11:04.400 ioengine=libaio 00:11:04.400 direct=1 00:11:04.400 bs=4096 00:11:04.400 iodepth=1 00:11:04.400 norandommap=0 00:11:04.400 numjobs=1 00:11:04.400 00:11:04.400 verify_dump=1 00:11:04.400 verify_backlog=512 00:11:04.400 verify_state_save=0 00:11:04.400 do_verify=1 00:11:04.400 verify=crc32c-intel 00:11:04.400 [job0] 00:11:04.400 filename=/dev/nvme0n1 00:11:04.400 Could not set queue depth (nvme0n1) 00:11:04.400 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:04.400 fio-3.35 00:11:04.400 Starting 1 thread 00:11:05.332 00:11:05.332 job0: (groupid=0, jobs=1): err= 0: pid=65953: Tue Nov 19 09:38:52 2024 00:11:05.332 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:11:05.332 slat (nsec): min=12693, max=54836, avg=17749.69, stdev=4270.82 00:11:05.332 clat (usec): min=144, max=1618, avg=198.20, stdev=39.33 00:11:05.332 lat (usec): min=162, max=1632, avg=215.95, stdev=39.83 00:11:05.332 clat percentiles (usec): 00:11:05.332 | 1.00th=[ 153], 5.00th=[ 163], 10.00th=[ 169], 20.00th=[ 178], 00:11:05.332 | 30.00th=[ 184], 40.00th=[ 188], 50.00th=[ 194], 60.00th=[ 200], 00:11:05.332 | 70.00th=[ 206], 80.00th=[ 217], 90.00th=[ 231], 95.00th=[ 247], 00:11:05.332 | 99.00th=[ 293], 99.50th=[ 302], 99.90th=[ 343], 99.95th=[ 510], 00:11:05.332 | 99.99th=[ 1614] 00:11:05.332 write: IOPS=3049, BW=11.9MiB/s (12.5MB/s)(11.9MiB/1001msec); 0 zone resets 00:11:05.332 slat (nsec): min=18568, max=81885, avg=24757.55, stdev=5548.18 00:11:05.332 clat (usec): min=89, max=606, avg=117.86, stdev=21.52 00:11:05.332 lat (usec): min=109, max=634, avg=142.62, stdev=22.99 00:11:05.332 clat percentiles (usec): 00:11:05.332 | 1.00th=[ 94], 5.00th=[ 98], 10.00th=[ 100], 20.00th=[ 105], 00:11:05.332 | 30.00th=[ 109], 40.00th=[ 112], 50.00th=[ 115], 60.00th=[ 119], 00:11:05.332 | 70.00th=[ 123], 80.00th=[ 129], 90.00th=[ 139], 95.00th=[ 149], 00:11:05.332 | 99.00th=[ 176], 99.50th=[ 184], 99.90th=[ 355], 99.95th=[ 510], 00:11:05.333 | 99.99th=[ 611] 00:11:05.333 bw ( KiB/s): min=12288, max=12288, per=100.00%, avg=12288.00, stdev= 0.00, samples=1 00:11:05.333 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:11:05.333 lat (usec) : 100=5.27%, 250=92.54%, 500=2.12%, 750=0.05% 00:11:05.333 lat (msec) : 2=0.02% 00:11:05.333 cpu : usr=2.80%, sys=9.30%, ctx=5613, majf=0, minf=5 00:11:05.333 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:05.333 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:05.333 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:05.333 issued rwts: total=2560,3053,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:05.333 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:05.333 00:11:05.333 Run status group 0 (all jobs): 00:11:05.333 READ: bw=9.99MiB/s (10.5MB/s), 9.99MiB/s-9.99MiB/s (10.5MB/s-10.5MB/s), io=10.0MiB (10.5MB), run=1001-1001msec 00:11:05.333 WRITE: bw=11.9MiB/s (12.5MB/s), 11.9MiB/s-11.9MiB/s (12.5MB/s-12.5MB/s), io=11.9MiB (12.5MB), run=1001-1001msec 00:11:05.333 00:11:05.333 Disk stats (read/write): 00:11:05.333 nvme0n1: 
ios=2459/2560, merge=0/0, ticks=503/328, in_queue=831, util=91.38% 00:11:05.333 09:38:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:05.333 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:11:05.333 09:38:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:05.333 09:38:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:11:05.333 09:38:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:05.333 09:38:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:05.333 09:38:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:05.333 09:38:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:05.333 09:38:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:11:05.333 09:38:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:11:05.333 09:38:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:11:05.333 09:38:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:05.333 09:38:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:11:05.333 09:38:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:05.333 09:38:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:11:05.333 09:38:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:05.333 09:38:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:05.333 rmmod nvme_tcp 00:11:05.333 rmmod nvme_fabrics 00:11:05.333 rmmod nvme_keyring 00:11:05.591 09:38:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:05.591 09:38:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:11:05.591 09:38:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:11:05.591 09:38:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 65869 ']' 00:11:05.591 09:38:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 65869 00:11:05.591 09:38:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 65869 ']' 00:11:05.591 09:38:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 65869 00:11:05.591 09:38:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:11:05.591 09:38:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:05.591 09:38:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65869 00:11:05.591 killing process with pid 65869 00:11:05.591 09:38:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:05.591 09:38:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:05.591 09:38:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65869' 00:11:05.591 09:38:53 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- common/autotest_common.sh@973 -- # kill 65869 00:11:05.591 09:38:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 65869 00:11:05.850 09:38:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:05.850 09:38:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:05.850 09:38:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:05.850 09:38:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:11:05.850 09:38:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:11:05.850 09:38:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:05.850 09:38:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:11:05.850 09:38:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:05.850 09:38:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:11:05.850 09:38:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:11:05.850 09:38:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:11:05.850 09:38:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:11:05.850 09:38:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:11:05.850 09:38:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:11:05.850 09:38:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:11:05.850 09:38:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:11:05.850 09:38:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:11:05.850 09:38:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:11:05.850 09:38:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:11:05.850 09:38:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:11:05.850 09:38:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:05.850 09:38:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:05.850 09:38:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@246 -- # remove_spdk_ns 00:11:05.850 09:38:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:05.850 09:38:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:05.850 09:38:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:06.107 09:38:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@300 -- # return 0 00:11:06.107 00:11:06.107 real 0m5.543s 00:11:06.107 user 0m15.872s 00:11:06.107 sys 0m2.445s 00:11:06.107 09:38:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:06.107 09:38:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:06.107 
************************************ 00:11:06.107 END TEST nvmf_nmic 00:11:06.107 ************************************ 00:11:06.107 09:38:53 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:11:06.107 09:38:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:06.107 09:38:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:06.107 09:38:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:06.107 ************************************ 00:11:06.107 START TEST nvmf_fio_target 00:11:06.107 ************************************ 00:11:06.107 09:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:11:06.107 * Looking for test storage... 00:11:06.107 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:06.107 09:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:06.107 09:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:11:06.107 09:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:06.107 09:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:06.387 09:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:06.387 09:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:06.387 09:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:06.387 09:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:11:06.387 09:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:11:06.387 09:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:11:06.387 09:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:11:06.387 09:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:11:06.387 09:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:11:06.387 09:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:11:06.387 09:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:06.387 09:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:11:06.387 09:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:11:06.387 09:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:06.387 09:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:06.387 09:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:11:06.387 09:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:11:06.387 09:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:06.387 09:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:11:06.387 09:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:11:06.387 09:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:11:06.387 09:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:11:06.387 09:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:06.387 09:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:11:06.387 09:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:11:06.387 09:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:06.387 09:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:06.387 09:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:11:06.387 09:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:06.387 09:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:06.387 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:06.387 --rc genhtml_branch_coverage=1 00:11:06.387 --rc genhtml_function_coverage=1 00:11:06.387 --rc genhtml_legend=1 00:11:06.387 --rc geninfo_all_blocks=1 00:11:06.387 --rc geninfo_unexecuted_blocks=1 00:11:06.387 00:11:06.387 ' 00:11:06.387 09:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:06.387 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:06.387 --rc genhtml_branch_coverage=1 00:11:06.387 --rc genhtml_function_coverage=1 00:11:06.387 --rc genhtml_legend=1 00:11:06.387 --rc geninfo_all_blocks=1 00:11:06.387 --rc geninfo_unexecuted_blocks=1 00:11:06.387 00:11:06.387 ' 00:11:06.388 09:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:06.388 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:06.388 --rc genhtml_branch_coverage=1 00:11:06.388 --rc genhtml_function_coverage=1 00:11:06.388 --rc genhtml_legend=1 00:11:06.388 --rc geninfo_all_blocks=1 00:11:06.388 --rc geninfo_unexecuted_blocks=1 00:11:06.388 00:11:06.388 ' 00:11:06.388 09:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:06.388 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:06.388 --rc genhtml_branch_coverage=1 00:11:06.388 --rc genhtml_function_coverage=1 00:11:06.388 --rc genhtml_legend=1 00:11:06.388 --rc geninfo_all_blocks=1 00:11:06.388 --rc geninfo_unexecuted_blocks=1 00:11:06.388 00:11:06.388 ' 00:11:06.388 09:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:06.388 09:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:11:06.388 
09:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:06.388 09:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:06.388 09:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:06.388 09:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:06.388 09:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:06.388 09:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:06.388 09:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:06.388 09:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:06.388 09:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:06.388 09:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:06.388 09:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c 00:11:06.388 09:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=9203ba0c-8506-4f0b-a886-a7f874c4694c 00:11:06.388 09:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:06.388 09:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:06.388 09:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:06.388 09:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:06.388 09:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:06.388 09:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:11:06.388 09:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:06.388 09:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:06.388 09:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:06.388 09:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:06.388 09:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:06.388 09:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:06.388 09:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:11:06.388 09:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:06.388 09:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:11:06.388 09:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:06.388 09:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:06.388 09:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:06.388 09:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:06.388 09:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:06.388 09:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:06.388 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:06.388 09:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:06.388 09:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:06.388 09:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:06.388 09:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:06.388 09:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:06.388 09:38:53 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:06.388 09:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:11:06.388 09:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:06.388 09:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:06.388 09:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:06.388 09:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:06.388 09:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:06.388 09:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:06.388 09:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:06.388 09:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:06.388 09:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:11:06.388 09:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:11:06.388 09:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:11:06.388 09:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:11:06.388 09:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:11:06.388 09:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:11:06.388 09:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:06.388 09:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:11:06.388 09:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:11:06.388 09:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:11:06.388 09:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:06.388 09:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:11:06.388 09:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:06.388 09:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:11:06.388 09:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:06.389 09:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:11:06.389 09:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:06.389 09:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:06.389 09:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:06.389 09:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@158 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:06.389 09:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:06.389 09:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:06.389 09:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:11:06.389 Cannot find device "nvmf_init_br" 00:11:06.389 09:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:11:06.389 09:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:11:06.389 Cannot find device "nvmf_init_br2" 00:11:06.389 09:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:11:06.389 09:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:11:06.389 Cannot find device "nvmf_tgt_br" 00:11:06.389 09:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # true 00:11:06.389 09:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:11:06.389 Cannot find device "nvmf_tgt_br2" 00:11:06.389 09:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # true 00:11:06.389 09:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:11:06.389 Cannot find device "nvmf_init_br" 00:11:06.389 09:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # true 00:11:06.389 09:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:11:06.389 Cannot find device "nvmf_init_br2" 00:11:06.389 09:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # true 00:11:06.389 09:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:11:06.389 Cannot find device "nvmf_tgt_br" 00:11:06.389 09:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # true 00:11:06.389 09:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:11:06.389 Cannot find device "nvmf_tgt_br2" 00:11:06.389 09:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # true 00:11:06.389 09:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:11:06.389 Cannot find device "nvmf_br" 00:11:06.389 09:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # true 00:11:06.389 09:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:11:06.389 Cannot find device "nvmf_init_if" 00:11:06.389 09:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # true 00:11:06.389 09:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:11:06.389 Cannot find device "nvmf_init_if2" 00:11:06.389 09:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # true 00:11:06.389 09:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:06.389 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:06.389 09:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # true 00:11:06.389 
09:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:06.389 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:06.389 09:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # true 00:11:06.389 09:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:11:06.389 09:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:06.389 09:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:11:06.389 09:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:06.389 09:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:06.389 09:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:06.389 09:38:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:06.648 09:38:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:06.648 09:38:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:11:06.648 09:38:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:11:06.648 09:38:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:11:06.648 09:38:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:11:06.648 09:38:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:11:06.648 09:38:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:11:06.648 09:38:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:11:06.648 09:38:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:11:06.648 09:38:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:11:06.648 09:38:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:06.648 09:38:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:06.648 09:38:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:06.648 09:38:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:11:06.648 09:38:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:11:06.648 09:38:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:11:06.648 09:38:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master 
nvmf_br 00:11:06.648 09:38:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:06.648 09:38:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:06.648 09:38:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:06.648 09:38:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:11:06.648 09:38:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:11:06.648 09:38:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:11:06.648 09:38:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:06.648 09:38:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:11:06.648 09:38:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:11:06.648 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:06.648 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.088 ms 00:11:06.648 00:11:06.648 --- 10.0.0.3 ping statistics --- 00:11:06.648 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:06.648 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:11:06.648 09:38:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:11:06.648 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:11:06.648 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.086 ms 00:11:06.648 00:11:06.648 --- 10.0.0.4 ping statistics --- 00:11:06.648 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:06.648 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:11:06.648 09:38:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:06.648 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:06.648 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:11:06.648 00:11:06.648 --- 10.0.0.1 ping statistics --- 00:11:06.648 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:06.648 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:11:06.648 09:38:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:11:06.648 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:06.648 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.130 ms 00:11:06.648 00:11:06.648 --- 10.0.0.2 ping statistics --- 00:11:06.648 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:06.648 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:11:06.648 09:38:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:06.648 09:38:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@461 -- # return 0 00:11:06.648 09:38:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:06.648 09:38:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:06.648 09:38:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:06.648 09:38:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:06.648 09:38:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:06.648 09:38:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:06.648 09:38:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:06.648 09:38:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:11:06.648 09:38:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:06.648 09:38:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:06.648 09:38:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:06.648 09:38:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=66181 00:11:06.648 09:38:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:06.648 09:38:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 66181 00:11:06.648 09:38:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 66181 ']' 00:11:06.648 09:38:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:06.648 09:38:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:06.648 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:06.648 09:38:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:06.648 09:38:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:06.648 09:38:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:06.907 [2024-11-19 09:38:54.293730] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 
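
The nvmf_veth_init sequence above builds the test network that the rest of this run relies on: initiator-side veth interfaces nvmf_init_if (10.0.0.1) and nvmf_init_if2 (10.0.0.2) stay in the default namespace, target-side interfaces nvmf_tgt_if (10.0.0.3) and nvmf_tgt_if2 (10.0.0.4) are moved into the nvmf_tgt_ns_spdk namespace, and the peer ends of all four veth pairs are enslaved to the nvmf_br bridge, with iptables ACCEPT rules for NVMe/TCP port 4420. A condensed shell sketch of the same topology, shown for one initiator/target pair (the *_if2 pair is handled identically) and assuming the interface names and /24 addressing visible in the log, is:

    ip netns add nvmf_tgt_ns_spdk
    # one veth pair per side; the *_br peer ends stay in the default namespace
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    # bridge the two sides together
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    # allow NVMe/TCP traffic and intra-bridge forwarding
    # (the ipts helper in nvmf/common.sh additionally tags each rule with
    #  "-m comment --comment SPDK_NVMF:..." so it can be cleaned up later)
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    # reachability check, as performed by the script above
    ping -c 1 10.0.0.3

With the topology in place, nvmfappstart launches the target inside the namespace (ip netns exec nvmf_tgt_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF), which is what the startup banner that follows reports.
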
00:11:06.907 [2024-11-19 09:38:54.293834] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:06.907 [2024-11-19 09:38:54.449509] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:06.907 [2024-11-19 09:38:54.522672] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:06.907 [2024-11-19 09:38:54.522734] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:06.907 [2024-11-19 09:38:54.522747] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:06.907 [2024-11-19 09:38:54.522758] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:06.907 [2024-11-19 09:38:54.522768] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:06.907 [2024-11-19 09:38:54.524014] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:06.907 [2024-11-19 09:38:54.524197] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:06.907 [2024-11-19 09:38:54.524271] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:06.907 [2024-11-19 09:38:54.524274] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:07.165 [2024-11-19 09:38:54.582663] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:07.165 09:38:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:07.165 09:38:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:11:07.165 09:38:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:07.165 09:38:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:07.165 09:38:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:07.165 09:38:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:07.165 09:38:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:07.424 [2024-11-19 09:38:55.026260] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:07.682 09:38:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:07.941 09:38:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:11:07.941 09:38:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:08.200 09:38:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:11:08.200 09:38:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:08.766 09:38:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:11:08.766 09:38:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:09.024 09:38:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:11:09.024 09:38:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:11:09.283 09:38:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:09.541 09:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:11:09.541 09:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:09.800 09:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:11:09.800 09:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:10.059 09:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:11:10.059 09:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:11:10.366 09:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:10.930 09:38:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:11:10.930 09:38:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:11.188 09:38:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:11:11.188 09:38:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:11.446 09:38:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:11:11.704 [2024-11-19 09:38:59.218341] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:11.704 09:38:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:11:11.962 09:38:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:11:12.220 09:38:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c --hostid=9203ba0c-8506-4f0b-a886-a7f874c4694c -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:11:12.478 09:38:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:11:12.478 09:38:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:11:12.478 09:38:59 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:12.478 09:38:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:11:12.478 09:38:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:11:12.479 09:38:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:11:14.449 09:39:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:14.449 09:39:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:14.449 09:39:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:14.449 09:39:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:11:14.449 09:39:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:14.449 09:39:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:11:14.449 09:39:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:11:14.449 [global] 00:11:14.449 thread=1 00:11:14.449 invalidate=1 00:11:14.449 rw=write 00:11:14.449 time_based=1 00:11:14.449 runtime=1 00:11:14.449 ioengine=libaio 00:11:14.449 direct=1 00:11:14.450 bs=4096 00:11:14.450 iodepth=1 00:11:14.450 norandommap=0 00:11:14.450 numjobs=1 00:11:14.450 00:11:14.450 verify_dump=1 00:11:14.450 verify_backlog=512 00:11:14.450 verify_state_save=0 00:11:14.450 do_verify=1 00:11:14.450 verify=crc32c-intel 00:11:14.450 [job0] 00:11:14.450 filename=/dev/nvme0n1 00:11:14.450 [job1] 00:11:14.450 filename=/dev/nvme0n2 00:11:14.450 [job2] 00:11:14.450 filename=/dev/nvme0n3 00:11:14.450 [job3] 00:11:14.450 filename=/dev/nvme0n4 00:11:14.450 Could not set queue depth (nvme0n1) 00:11:14.450 Could not set queue depth (nvme0n2) 00:11:14.450 Could not set queue depth (nvme0n3) 00:11:14.450 Could not set queue depth (nvme0n4) 00:11:14.709 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:14.709 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:14.709 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:14.709 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:14.709 fio-3.35 00:11:14.709 Starting 4 threads 00:11:16.087 00:11:16.087 job0: (groupid=0, jobs=1): err= 0: pid=66370: Tue Nov 19 09:39:03 2024 00:11:16.087 read: IOPS=1197, BW=4791KiB/s (4906kB/s)(4796KiB/1001msec) 00:11:16.087 slat (nsec): min=11925, max=94758, avg=22833.87, stdev=7074.88 00:11:16.087 clat (usec): min=181, max=7706, avg=480.54, stdev=326.06 00:11:16.087 lat (usec): min=200, max=7725, avg=503.38, stdev=326.51 00:11:16.087 clat percentiles (usec): 00:11:16.087 | 1.00th=[ 196], 5.00th=[ 219], 10.00th=[ 239], 20.00th=[ 416], 00:11:16.087 | 30.00th=[ 441], 40.00th=[ 457], 50.00th=[ 469], 60.00th=[ 482], 00:11:16.087 | 70.00th=[ 498], 80.00th=[ 537], 90.00th=[ 627], 95.00th=[ 668], 00:11:16.087 | 99.00th=[ 857], 99.50th=[ 1139], 99.90th=[ 6652], 99.95th=[ 7701], 00:11:16.087 | 99.99th=[ 
7701] 00:11:16.087 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:11:16.087 slat (usec): min=14, max=121, avg=26.98, stdev=10.37 00:11:16.087 clat (usec): min=106, max=2937, avg=226.09, stdev=90.44 00:11:16.087 lat (usec): min=132, max=2971, avg=253.07, stdev=91.86 00:11:16.087 clat percentiles (usec): 00:11:16.087 | 1.00th=[ 129], 5.00th=[ 145], 10.00th=[ 155], 20.00th=[ 174], 00:11:16.087 | 30.00th=[ 196], 40.00th=[ 208], 50.00th=[ 219], 60.00th=[ 229], 00:11:16.087 | 70.00th=[ 249], 80.00th=[ 273], 90.00th=[ 293], 95.00th=[ 318], 00:11:16.087 | 99.00th=[ 416], 99.50th=[ 437], 99.90th=[ 750], 99.95th=[ 2933], 00:11:16.087 | 99.99th=[ 2933] 00:11:16.087 bw ( KiB/s): min= 8192, max= 8192, per=33.26%, avg=8192.00, stdev= 0.00, samples=1 00:11:16.087 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:11:16.087 lat (usec) : 250=45.12%, 500=42.27%, 750=11.74%, 1000=0.48% 00:11:16.087 lat (msec) : 2=0.22%, 4=0.11%, 10=0.07% 00:11:16.087 cpu : usr=1.70%, sys=5.70%, ctx=2746, majf=0, minf=7 00:11:16.087 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:16.087 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:16.087 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:16.087 issued rwts: total=1199,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:16.087 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:16.087 job1: (groupid=0, jobs=1): err= 0: pid=66371: Tue Nov 19 09:39:03 2024 00:11:16.087 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:11:16.087 slat (nsec): min=11020, max=93252, avg=21696.63, stdev=7563.20 00:11:16.087 clat (usec): min=319, max=763, avg=496.16, stdev=48.34 00:11:16.087 lat (usec): min=352, max=777, avg=517.86, stdev=48.40 00:11:16.087 clat percentiles (usec): 00:11:16.087 | 1.00th=[ 396], 5.00th=[ 429], 10.00th=[ 445], 20.00th=[ 457], 00:11:16.087 | 30.00th=[ 469], 40.00th=[ 482], 50.00th=[ 490], 60.00th=[ 506], 00:11:16.087 | 70.00th=[ 519], 80.00th=[ 529], 90.00th=[ 553], 95.00th=[ 578], 00:11:16.087 | 99.00th=[ 644], 99.50th=[ 652], 99.90th=[ 717], 99.95th=[ 766], 00:11:16.087 | 99.99th=[ 766] 00:11:16.087 write: IOPS=1178, BW=4715KiB/s (4828kB/s)(4720KiB/1001msec); 0 zone resets 00:11:16.087 slat (usec): min=18, max=128, avg=38.99, stdev=12.42 00:11:16.087 clat (usec): min=147, max=1376, avg=352.94, stdev=101.28 00:11:16.087 lat (usec): min=212, max=1459, avg=391.93, stdev=106.32 00:11:16.087 clat percentiles (usec): 00:11:16.087 | 1.00th=[ 200], 5.00th=[ 225], 10.00th=[ 245], 20.00th=[ 277], 00:11:16.087 | 30.00th=[ 293], 40.00th=[ 310], 50.00th=[ 334], 60.00th=[ 363], 00:11:16.087 | 70.00th=[ 392], 80.00th=[ 429], 90.00th=[ 478], 95.00th=[ 519], 00:11:16.087 | 99.00th=[ 652], 99.50th=[ 758], 99.90th=[ 1057], 99.95th=[ 1369], 00:11:16.087 | 99.99th=[ 1369] 00:11:16.087 bw ( KiB/s): min= 4744, max= 4744, per=19.26%, avg=4744.00, stdev= 0.00, samples=1 00:11:16.087 iops : min= 1186, max= 1186, avg=1186.00, stdev= 0.00, samples=1 00:11:16.087 lat (usec) : 250=6.35%, 500=70.19%, 750=23.14%, 1000=0.23% 00:11:16.087 lat (msec) : 2=0.09% 00:11:16.087 cpu : usr=1.80%, sys=5.50%, ctx=2224, majf=0, minf=9 00:11:16.087 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:16.087 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:16.087 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:16.087 issued rwts: total=1024,1180,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:11:16.087 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:16.087 job2: (groupid=0, jobs=1): err= 0: pid=66372: Tue Nov 19 09:39:03 2024 00:11:16.087 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:11:16.087 slat (nsec): min=13184, max=51552, avg=15899.55, stdev=3302.31 00:11:16.087 clat (usec): min=175, max=417, avg=233.73, stdev=20.17 00:11:16.087 lat (usec): min=193, max=435, avg=249.63, stdev=20.95 00:11:16.087 clat percentiles (usec): 00:11:16.087 | 1.00th=[ 194], 5.00th=[ 206], 10.00th=[ 210], 20.00th=[ 219], 00:11:16.087 | 30.00th=[ 223], 40.00th=[ 227], 50.00th=[ 233], 60.00th=[ 237], 00:11:16.087 | 70.00th=[ 243], 80.00th=[ 249], 90.00th=[ 260], 95.00th=[ 269], 00:11:16.087 | 99.00th=[ 293], 99.50th=[ 310], 99.90th=[ 347], 99.95th=[ 351], 00:11:16.087 | 99.99th=[ 416] 00:11:16.087 write: IOPS=2420, BW=9682KiB/s (9915kB/s)(9692KiB/1001msec); 0 zone resets 00:11:16.087 slat (usec): min=17, max=101, avg=24.72, stdev= 5.85 00:11:16.087 clat (usec): min=114, max=1972, avg=173.42, stdev=59.01 00:11:16.087 lat (usec): min=136, max=1998, avg=198.14, stdev=60.00 00:11:16.087 clat percentiles (usec): 00:11:16.087 | 1.00th=[ 123], 5.00th=[ 135], 10.00th=[ 143], 20.00th=[ 151], 00:11:16.087 | 30.00th=[ 157], 40.00th=[ 163], 50.00th=[ 169], 60.00th=[ 176], 00:11:16.087 | 70.00th=[ 182], 80.00th=[ 190], 90.00th=[ 206], 95.00th=[ 221], 00:11:16.087 | 99.00th=[ 255], 99.50th=[ 265], 99.90th=[ 1369], 99.95th=[ 1598], 00:11:16.087 | 99.99th=[ 1975] 00:11:16.087 bw ( KiB/s): min= 9064, max= 9064, per=36.80%, avg=9064.00, stdev= 0.00, samples=1 00:11:16.087 iops : min= 2266, max= 2266, avg=2266.00, stdev= 0.00, samples=1 00:11:16.087 lat (usec) : 250=90.92%, 500=9.01% 00:11:16.087 lat (msec) : 2=0.07% 00:11:16.087 cpu : usr=2.00%, sys=7.10%, ctx=4472, majf=0, minf=9 00:11:16.087 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:16.087 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:16.087 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:16.087 issued rwts: total=2048,2423,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:16.087 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:16.087 job3: (groupid=0, jobs=1): err= 0: pid=66373: Tue Nov 19 09:39:03 2024 00:11:16.087 read: IOPS=1003, BW=4016KiB/s (4112kB/s)(4020KiB/1001msec) 00:11:16.087 slat (nsec): min=19449, max=93154, avg=44230.92, stdev=12901.63 00:11:16.087 clat (usec): min=263, max=1068, avg=569.66, stdev=157.47 00:11:16.087 lat (usec): min=285, max=1103, avg=613.89, stdev=161.31 00:11:16.087 clat percentiles (usec): 00:11:16.087 | 1.00th=[ 379], 5.00th=[ 404], 10.00th=[ 416], 20.00th=[ 437], 00:11:16.087 | 30.00th=[ 453], 40.00th=[ 465], 50.00th=[ 486], 60.00th=[ 545], 00:11:16.087 | 70.00th=[ 693], 80.00th=[ 742], 90.00th=[ 807], 95.00th=[ 848], 00:11:16.087 | 99.00th=[ 898], 99.50th=[ 906], 99.90th=[ 963], 99.95th=[ 1074], 00:11:16.087 | 99.99th=[ 1074] 00:11:16.087 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:11:16.087 slat (usec): min=25, max=108, avg=42.46, stdev= 9.81 00:11:16.087 clat (usec): min=149, max=1221, avg=321.61, stdev=96.96 00:11:16.087 lat (usec): min=180, max=1275, avg=364.07, stdev=99.07 00:11:16.087 clat percentiles (usec): 00:11:16.088 | 1.00th=[ 163], 5.00th=[ 186], 10.00th=[ 206], 20.00th=[ 237], 00:11:16.088 | 30.00th=[ 265], 40.00th=[ 289], 50.00th=[ 314], 60.00th=[ 347], 00:11:16.088 | 70.00th=[ 379], 80.00th=[ 400], 90.00th=[ 429], 95.00th=[ 
449], 00:11:16.088 | 99.00th=[ 594], 99.50th=[ 644], 99.90th=[ 938], 99.95th=[ 1221], 00:11:16.088 | 99.99th=[ 1221] 00:11:16.088 bw ( KiB/s): min= 4248, max= 4248, per=17.25%, avg=4248.00, stdev= 0.00, samples=1 00:11:16.088 iops : min= 1062, max= 1062, avg=1062.00, stdev= 0.00, samples=1 00:11:16.088 lat (usec) : 250=12.12%, 500=63.82%, 750=14.24%, 1000=9.71% 00:11:16.088 lat (msec) : 2=0.10% 00:11:16.088 cpu : usr=1.80%, sys=7.40%, ctx=2030, majf=0, minf=11 00:11:16.088 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:16.088 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:16.088 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:16.088 issued rwts: total=1005,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:16.088 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:16.088 00:11:16.088 Run status group 0 (all jobs): 00:11:16.088 READ: bw=20.6MiB/s (21.6MB/s), 4016KiB/s-8184KiB/s (4112kB/s-8380kB/s), io=20.6MiB (21.6MB), run=1001-1001msec 00:11:16.088 WRITE: bw=24.0MiB/s (25.2MB/s), 4092KiB/s-9682KiB/s (4190kB/s-9915kB/s), io=24.1MiB (25.2MB), run=1001-1001msec 00:11:16.088 00:11:16.088 Disk stats (read/write): 00:11:16.088 nvme0n1: ios=1074/1373, merge=0/0, ticks=474/303, in_queue=777, util=86.87% 00:11:16.088 nvme0n2: ios=908/1024, merge=0/0, ticks=429/368, in_queue=797, util=88.96% 00:11:16.088 nvme0n3: ios=1791/2048, merge=0/0, ticks=434/379, in_queue=813, util=89.14% 00:11:16.088 nvme0n4: ios=784/1024, merge=0/0, ticks=423/346, in_queue=769, util=89.70% 00:11:16.088 09:39:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:11:16.088 [global] 00:11:16.088 thread=1 00:11:16.088 invalidate=1 00:11:16.088 rw=randwrite 00:11:16.088 time_based=1 00:11:16.088 runtime=1 00:11:16.088 ioengine=libaio 00:11:16.088 direct=1 00:11:16.088 bs=4096 00:11:16.088 iodepth=1 00:11:16.088 norandommap=0 00:11:16.088 numjobs=1 00:11:16.088 00:11:16.088 verify_dump=1 00:11:16.088 verify_backlog=512 00:11:16.088 verify_state_save=0 00:11:16.088 do_verify=1 00:11:16.088 verify=crc32c-intel 00:11:16.088 [job0] 00:11:16.088 filename=/dev/nvme0n1 00:11:16.088 [job1] 00:11:16.088 filename=/dev/nvme0n2 00:11:16.088 [job2] 00:11:16.088 filename=/dev/nvme0n3 00:11:16.088 [job3] 00:11:16.088 filename=/dev/nvme0n4 00:11:16.088 Could not set queue depth (nvme0n1) 00:11:16.088 Could not set queue depth (nvme0n2) 00:11:16.088 Could not set queue depth (nvme0n3) 00:11:16.088 Could not set queue depth (nvme0n4) 00:11:16.088 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:16.088 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:16.088 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:16.088 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:16.088 fio-3.35 00:11:16.088 Starting 4 threads 00:11:17.463 00:11:17.463 job0: (groupid=0, jobs=1): err= 0: pid=66430: Tue Nov 19 09:39:04 2024 00:11:17.463 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:11:17.463 slat (nsec): min=12869, max=66009, avg=17279.60, stdev=3982.34 00:11:17.463 clat (usec): min=155, max=2487, avg=237.35, stdev=61.17 00:11:17.463 lat (usec): min=169, max=2518, avg=254.63, stdev=61.92 
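
For reference, the sequential-write verify pass launched at target/fio.sh@50 above (fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v) corresponds to the job file echoed in the log; comparing the wrapper flags with that listing, -i maps to bs, -d to iodepth, -t to rw and -r to runtime. Written out as a standalone file (file name chosen here purely for illustration) it could be replayed with "fio nvmf-write.fio" once the nvme connect from fio.sh@46 has created /dev/nvme0n1 through /dev/nvme0n4:

    [global]
    thread=1
    invalidate=1
    rw=write
    time_based=1
    runtime=1
    ioengine=libaio
    direct=1
    bs=4096
    iodepth=1
    norandommap=0
    numjobs=1
    verify_dump=1
    verify_backlog=512
    verify_state_save=0
    do_verify=1
    verify=crc32c-intel

    [job0]
    filename=/dev/nvme0n1
    [job1]
    filename=/dev/nvme0n2
    [job2]
    filename=/dev/nvme0n3
    [job3]
    filename=/dev/nvme0n4

The later passes in this log reuse the same template and only vary those knobs: the randwrite pass whose per-job statistics continue below sets rw=randwrite, and the two passes after it raise iodepth to 128 (-d 128).
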
00:11:17.463 clat percentiles (usec): 00:11:17.463 | 1.00th=[ 174], 5.00th=[ 190], 10.00th=[ 200], 20.00th=[ 212], 00:11:17.463 | 30.00th=[ 221], 40.00th=[ 227], 50.00th=[ 235], 60.00th=[ 241], 00:11:17.463 | 70.00th=[ 249], 80.00th=[ 260], 90.00th=[ 273], 95.00th=[ 285], 00:11:17.463 | 99.00th=[ 318], 99.50th=[ 347], 99.90th=[ 685], 99.95th=[ 791], 00:11:17.463 | 99.99th=[ 2474] 00:11:17.463 write: IOPS=2222, BW=8891KiB/s (9104kB/s)(8900KiB/1001msec); 0 zone resets 00:11:17.463 slat (usec): min=15, max=123, avg=25.92, stdev= 6.80 00:11:17.463 clat (usec): min=99, max=643, avg=185.11, stdev=31.66 00:11:17.463 lat (usec): min=120, max=669, avg=211.03, stdev=32.97 00:11:17.463 clat percentiles (usec): 00:11:17.463 | 1.00th=[ 119], 5.00th=[ 137], 10.00th=[ 149], 20.00th=[ 163], 00:11:17.463 | 30.00th=[ 172], 40.00th=[ 178], 50.00th=[ 184], 60.00th=[ 190], 00:11:17.463 | 70.00th=[ 198], 80.00th=[ 206], 90.00th=[ 221], 95.00th=[ 233], 00:11:17.463 | 99.00th=[ 269], 99.50th=[ 281], 99.90th=[ 383], 99.95th=[ 408], 00:11:17.463 | 99.99th=[ 644] 00:11:17.463 bw ( KiB/s): min= 8592, max= 8592, per=25.68%, avg=8592.00, stdev= 0.00, samples=1 00:11:17.463 iops : min= 2148, max= 2148, avg=2148.00, stdev= 0.00, samples=1 00:11:17.463 lat (usec) : 100=0.02%, 250=84.46%, 500=15.38%, 750=0.09%, 1000=0.02% 00:11:17.463 lat (msec) : 4=0.02% 00:11:17.463 cpu : usr=2.00%, sys=7.10%, ctx=4273, majf=0, minf=11 00:11:17.463 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:17.464 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:17.464 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:17.464 issued rwts: total=2048,2225,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:17.464 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:17.464 job1: (groupid=0, jobs=1): err= 0: pid=66431: Tue Nov 19 09:39:04 2024 00:11:17.464 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:11:17.464 slat (nsec): min=11299, max=47811, avg=17323.00, stdev=4576.53 00:11:17.464 clat (usec): min=149, max=2697, avg=243.84, stdev=66.80 00:11:17.464 lat (usec): min=163, max=2729, avg=261.16, stdev=67.64 00:11:17.464 clat percentiles (usec): 00:11:17.464 | 1.00th=[ 176], 5.00th=[ 192], 10.00th=[ 202], 20.00th=[ 212], 00:11:17.464 | 30.00th=[ 223], 40.00th=[ 231], 50.00th=[ 239], 60.00th=[ 247], 00:11:17.464 | 70.00th=[ 258], 80.00th=[ 269], 90.00th=[ 285], 95.00th=[ 306], 00:11:17.464 | 99.00th=[ 359], 99.50th=[ 441], 99.90th=[ 611], 99.95th=[ 627], 00:11:17.464 | 99.99th=[ 2704] 00:11:17.464 write: IOPS=2049, BW=8200KiB/s (8397kB/s)(8208KiB/1001msec); 0 zone resets 00:11:17.464 slat (usec): min=20, max=118, avg=29.56, stdev= 8.05 00:11:17.464 clat (usec): min=109, max=821, avg=192.23, stdev=34.80 00:11:17.464 lat (usec): min=133, max=843, avg=221.79, stdev=36.72 00:11:17.464 clat percentiles (usec): 00:11:17.464 | 1.00th=[ 127], 5.00th=[ 147], 10.00th=[ 157], 20.00th=[ 167], 00:11:17.464 | 30.00th=[ 174], 40.00th=[ 182], 50.00th=[ 188], 60.00th=[ 196], 00:11:17.464 | 70.00th=[ 204], 80.00th=[ 217], 90.00th=[ 235], 95.00th=[ 251], 00:11:17.464 | 99.00th=[ 285], 99.50th=[ 293], 99.90th=[ 318], 99.95th=[ 408], 00:11:17.464 | 99.99th=[ 824] 00:11:17.464 bw ( KiB/s): min= 8192, max= 8192, per=24.48%, avg=8192.00, stdev= 0.00, samples=1 00:11:17.464 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:11:17.464 lat (usec) : 250=79.00%, 500=20.90%, 750=0.05%, 1000=0.02% 00:11:17.464 lat (msec) : 4=0.02% 00:11:17.464 cpu : usr=2.20%, 
sys=7.60%, ctx=4105, majf=0, minf=10 00:11:17.464 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:17.464 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:17.464 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:17.464 issued rwts: total=2048,2052,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:17.464 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:17.464 job2: (groupid=0, jobs=1): err= 0: pid=66432: Tue Nov 19 09:39:04 2024 00:11:17.464 read: IOPS=1981, BW=7924KiB/s (8114kB/s)(7932KiB/1001msec) 00:11:17.464 slat (usec): min=13, max=112, avg=20.58, stdev= 6.69 00:11:17.464 clat (usec): min=159, max=540, avg=246.69, stdev=30.13 00:11:17.464 lat (usec): min=175, max=560, avg=267.27, stdev=31.20 00:11:17.464 clat percentiles (usec): 00:11:17.464 | 1.00th=[ 188], 5.00th=[ 202], 10.00th=[ 212], 20.00th=[ 223], 00:11:17.464 | 30.00th=[ 231], 40.00th=[ 239], 50.00th=[ 245], 60.00th=[ 251], 00:11:17.464 | 70.00th=[ 260], 80.00th=[ 269], 90.00th=[ 281], 95.00th=[ 297], 00:11:17.464 | 99.00th=[ 330], 99.50th=[ 343], 99.90th=[ 502], 99.95th=[ 537], 00:11:17.464 | 99.99th=[ 537] 00:11:17.464 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:11:17.464 slat (usec): min=19, max=122, avg=29.08, stdev= 9.55 00:11:17.464 clat (usec): min=120, max=537, avg=195.51, stdev=28.85 00:11:17.464 lat (usec): min=147, max=575, avg=224.59, stdev=31.12 00:11:17.464 clat percentiles (usec): 00:11:17.464 | 1.00th=[ 139], 5.00th=[ 155], 10.00th=[ 163], 20.00th=[ 174], 00:11:17.464 | 30.00th=[ 180], 40.00th=[ 188], 50.00th=[ 194], 60.00th=[ 200], 00:11:17.464 | 70.00th=[ 208], 80.00th=[ 215], 90.00th=[ 229], 95.00th=[ 245], 00:11:17.464 | 99.00th=[ 277], 99.50th=[ 289], 99.90th=[ 334], 99.95th=[ 441], 00:11:17.464 | 99.99th=[ 537] 00:11:17.464 bw ( KiB/s): min= 8192, max= 8192, per=24.48%, avg=8192.00, stdev= 0.00, samples=1 00:11:17.464 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:11:17.464 lat (usec) : 250=77.00%, 500=22.92%, 750=0.07% 00:11:17.464 cpu : usr=2.40%, sys=7.80%, ctx=4032, majf=0, minf=15 00:11:17.464 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:17.464 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:17.464 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:17.464 issued rwts: total=1983,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:17.464 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:17.464 job3: (groupid=0, jobs=1): err= 0: pid=66433: Tue Nov 19 09:39:04 2024 00:11:17.464 read: IOPS=1874, BW=7497KiB/s (7676kB/s)(7504KiB/1001msec) 00:11:17.464 slat (nsec): min=11964, max=69723, avg=19999.36, stdev=5292.71 00:11:17.464 clat (usec): min=173, max=3741, avg=255.00, stdev=92.54 00:11:17.464 lat (usec): min=186, max=3774, avg=275.00, stdev=92.83 00:11:17.464 clat percentiles (usec): 00:11:17.464 | 1.00th=[ 188], 5.00th=[ 202], 10.00th=[ 210], 20.00th=[ 223], 00:11:17.464 | 30.00th=[ 235], 40.00th=[ 241], 50.00th=[ 249], 60.00th=[ 260], 00:11:17.464 | 70.00th=[ 265], 80.00th=[ 277], 90.00th=[ 297], 95.00th=[ 314], 00:11:17.464 | 99.00th=[ 355], 99.50th=[ 367], 99.90th=[ 1074], 99.95th=[ 3752], 00:11:17.464 | 99.99th=[ 3752] 00:11:17.464 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:11:17.464 slat (nsec): min=17429, max=92037, avg=29105.91, stdev=8416.22 00:11:17.464 clat (usec): min=117, max=2228, avg=202.52, stdev=56.33 
00:11:17.464 lat (usec): min=139, max=2251, avg=231.63, stdev=57.37 00:11:17.464 clat percentiles (usec): 00:11:17.464 | 1.00th=[ 137], 5.00th=[ 151], 10.00th=[ 161], 20.00th=[ 176], 00:11:17.464 | 30.00th=[ 184], 40.00th=[ 192], 50.00th=[ 198], 60.00th=[ 206], 00:11:17.464 | 70.00th=[ 217], 80.00th=[ 227], 90.00th=[ 245], 95.00th=[ 265], 00:11:17.464 | 99.00th=[ 297], 99.50th=[ 314], 99.90th=[ 433], 99.95th=[ 510], 00:11:17.464 | 99.99th=[ 2245] 00:11:17.464 bw ( KiB/s): min= 8192, max= 8192, per=24.48%, avg=8192.00, stdev= 0.00, samples=1 00:11:17.464 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:11:17.464 lat (usec) : 250=71.79%, 500=28.03%, 750=0.05%, 1000=0.03% 00:11:17.464 lat (msec) : 2=0.05%, 4=0.05% 00:11:17.464 cpu : usr=2.80%, sys=7.20%, ctx=3924, majf=0, minf=10 00:11:17.464 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:17.464 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:17.464 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:17.464 issued rwts: total=1876,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:17.464 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:17.464 00:11:17.464 Run status group 0 (all jobs): 00:11:17.464 READ: bw=31.0MiB/s (32.6MB/s), 7497KiB/s-8184KiB/s (7676kB/s-8380kB/s), io=31.1MiB (32.6MB), run=1001-1001msec 00:11:17.464 WRITE: bw=32.7MiB/s (34.3MB/s), 8184KiB/s-8891KiB/s (8380kB/s-9104kB/s), io=32.7MiB (34.3MB), run=1001-1001msec 00:11:17.464 00:11:17.464 Disk stats (read/write): 00:11:17.464 nvme0n1: ios=1718/2048, merge=0/0, ticks=428/406, in_queue=834, util=88.16% 00:11:17.464 nvme0n2: ios=1563/2048, merge=0/0, ticks=401/411, in_queue=812, util=88.32% 00:11:17.464 nvme0n3: ios=1536/1956, merge=0/0, ticks=390/394, in_queue=784, util=89.14% 00:11:17.464 nvme0n4: ios=1536/1845, merge=0/0, ticks=386/391, in_queue=777, util=89.38% 00:11:17.464 09:39:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:11:17.464 [global] 00:11:17.464 thread=1 00:11:17.464 invalidate=1 00:11:17.464 rw=write 00:11:17.464 time_based=1 00:11:17.464 runtime=1 00:11:17.464 ioengine=libaio 00:11:17.464 direct=1 00:11:17.464 bs=4096 00:11:17.464 iodepth=128 00:11:17.464 norandommap=0 00:11:17.464 numjobs=1 00:11:17.464 00:11:17.464 verify_dump=1 00:11:17.464 verify_backlog=512 00:11:17.464 verify_state_save=0 00:11:17.464 do_verify=1 00:11:17.464 verify=crc32c-intel 00:11:17.464 [job0] 00:11:17.464 filename=/dev/nvme0n1 00:11:17.464 [job1] 00:11:17.464 filename=/dev/nvme0n2 00:11:17.464 [job2] 00:11:17.464 filename=/dev/nvme0n3 00:11:17.464 [job3] 00:11:17.464 filename=/dev/nvme0n4 00:11:17.464 Could not set queue depth (nvme0n1) 00:11:17.464 Could not set queue depth (nvme0n2) 00:11:17.464 Could not set queue depth (nvme0n3) 00:11:17.464 Could not set queue depth (nvme0n4) 00:11:17.464 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:17.464 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:17.464 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:17.464 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:17.464 fio-3.35 00:11:17.464 Starting 4 threads 00:11:18.894 00:11:18.894 job0: (groupid=0, jobs=1): 
err= 0: pid=66495: Tue Nov 19 09:39:06 2024 00:11:18.894 read: IOPS=3053, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1006msec) 00:11:18.894 slat (usec): min=6, max=12329, avg=148.75, stdev=1003.75 00:11:18.894 clat (usec): min=10427, max=41976, avg=20590.75, stdev=3719.21 00:11:18.894 lat (usec): min=10438, max=48843, avg=20739.50, stdev=3767.26 00:11:18.894 clat percentiles (usec): 00:11:18.894 | 1.00th=[11731], 5.00th=[17695], 10.00th=[18220], 20.00th=[18744], 00:11:18.894 | 30.00th=[19006], 40.00th=[19268], 50.00th=[19792], 60.00th=[20317], 00:11:18.894 | 70.00th=[20579], 80.00th=[21365], 90.00th=[23725], 95.00th=[30540], 00:11:18.894 | 99.00th=[32637], 99.50th=[32900], 99.90th=[37487], 99.95th=[37487], 00:11:18.894 | 99.99th=[42206] 00:11:18.894 write: IOPS=3498, BW=13.7MiB/s (14.3MB/s)(13.7MiB/1006msec); 0 zone resets 00:11:18.894 slat (usec): min=6, max=14653, avg=148.00, stdev=971.19 00:11:18.894 clat (usec): min=830, max=28924, avg=18250.54, stdev=2505.94 00:11:18.894 lat (usec): min=7444, max=29156, avg=18398.54, stdev=2360.23 00:11:18.894 clat percentiles (usec): 00:11:18.894 | 1.00th=[ 8455], 5.00th=[15270], 10.00th=[16581], 20.00th=[17171], 00:11:18.894 | 30.00th=[17433], 40.00th=[17695], 50.00th=[17957], 60.00th=[18482], 00:11:18.894 | 70.00th=[19006], 80.00th=[19530], 90.00th=[20841], 95.00th=[21890], 00:11:18.894 | 99.00th=[25035], 99.50th=[25297], 99.90th=[28967], 99.95th=[28967], 00:11:18.894 | 99.99th=[28967] 00:11:18.894 bw ( KiB/s): min=13312, max=13816, per=26.13%, avg=13564.00, stdev=356.38, samples=2 00:11:18.894 iops : min= 3328, max= 3454, avg=3391.00, stdev=89.10, samples=2 00:11:18.894 lat (usec) : 1000=0.02% 00:11:18.894 lat (msec) : 10=1.09%, 20=68.71%, 50=30.18% 00:11:18.894 cpu : usr=3.18%, sys=8.76%, ctx=140, majf=0, minf=9 00:11:18.894 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:11:18.894 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:18.894 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:18.894 issued rwts: total=3072,3519,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:18.894 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:18.894 job1: (groupid=0, jobs=1): err= 0: pid=66496: Tue Nov 19 09:39:06 2024 00:11:18.894 read: IOPS=3062, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1003msec) 00:11:18.894 slat (usec): min=6, max=5629, avg=151.39, stdev=767.44 00:11:18.894 clat (usec): min=13836, max=23312, avg=19827.17, stdev=1465.81 00:11:18.894 lat (usec): min=17933, max=23320, avg=19978.55, stdev=1264.86 00:11:18.894 clat percentiles (usec): 00:11:18.894 | 1.00th=[15008], 5.00th=[17957], 10.00th=[18482], 20.00th=[18744], 00:11:18.894 | 30.00th=[19268], 40.00th=[19268], 50.00th=[19792], 60.00th=[20055], 00:11:18.894 | 70.00th=[20579], 80.00th=[21103], 90.00th=[21890], 95.00th=[22414], 00:11:18.894 | 99.00th=[22938], 99.50th=[23200], 99.90th=[23200], 99.95th=[23200], 00:11:18.894 | 99.99th=[23200] 00:11:18.894 write: IOPS=3382, BW=13.2MiB/s (13.9MB/s)(13.3MiB/1003msec); 0 zone resets 00:11:18.894 slat (usec): min=8, max=5282, avg=150.07, stdev=705.96 00:11:18.894 clat (usec): min=644, max=22724, avg=19298.18, stdev=2431.21 00:11:18.894 lat (usec): min=4433, max=22742, avg=19448.25, stdev=2331.58 00:11:18.894 clat percentiles (usec): 00:11:18.894 | 1.00th=[ 8586], 5.00th=[16188], 10.00th=[17695], 20.00th=[18220], 00:11:18.894 | 30.00th=[18482], 40.00th=[18744], 50.00th=[19268], 60.00th=[19792], 00:11:18.894 | 70.00th=[20317], 80.00th=[21103], 90.00th=[21890], 95.00th=[22152], 
00:11:18.894 | 99.00th=[22414], 99.50th=[22414], 99.90th=[22676], 99.95th=[22676], 00:11:18.894 | 99.99th=[22676] 00:11:18.894 bw ( KiB/s): min=12800, max=13346, per=25.18%, avg=13073.00, stdev=386.08, samples=2 00:11:18.894 iops : min= 3200, max= 3336, avg=3268.00, stdev=96.17, samples=2 00:11:18.894 lat (usec) : 750=0.02% 00:11:18.894 lat (msec) : 10=0.99%, 20=61.18%, 50=37.82% 00:11:18.894 cpu : usr=3.39%, sys=8.88%, ctx=203, majf=0, minf=8 00:11:18.894 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:11:18.894 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:18.894 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:18.894 issued rwts: total=3072,3393,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:18.894 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:18.894 job2: (groupid=0, jobs=1): err= 0: pid=66497: Tue Nov 19 09:39:06 2024 00:11:18.894 read: IOPS=2871, BW=11.2MiB/s (11.8MB/s)(11.3MiB/1005msec) 00:11:18.894 slat (usec): min=6, max=19527, avg=171.24, stdev=1152.81 00:11:18.894 clat (usec): min=2955, max=46456, avg=22621.28, stdev=4982.01 00:11:18.894 lat (usec): min=7790, max=46470, avg=22792.52, stdev=5008.36 00:11:18.894 clat percentiles (usec): 00:11:18.894 | 1.00th=[11863], 5.00th=[13829], 10.00th=[19530], 20.00th=[20579], 00:11:18.894 | 30.00th=[21365], 40.00th=[21627], 50.00th=[22152], 60.00th=[22414], 00:11:18.894 | 70.00th=[23200], 80.00th=[24511], 90.00th=[25297], 95.00th=[33162], 00:11:18.894 | 99.00th=[43254], 99.50th=[44827], 99.90th=[46400], 99.95th=[46400], 00:11:18.894 | 99.99th=[46400] 00:11:18.894 write: IOPS=3056, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1005msec); 0 zone resets 00:11:18.894 slat (usec): min=6, max=15952, avg=157.13, stdev=984.04 00:11:18.894 clat (usec): min=4699, max=46418, avg=20193.72, stdev=3410.67 00:11:18.894 lat (usec): min=4721, max=46433, avg=20350.85, stdev=3313.57 00:11:18.894 clat percentiles (usec): 00:11:18.894 | 1.00th=[ 6849], 5.00th=[14484], 10.00th=[17695], 20.00th=[19006], 00:11:18.894 | 30.00th=[19530], 40.00th=[19792], 50.00th=[20055], 60.00th=[20317], 00:11:18.894 | 70.00th=[20579], 80.00th=[21103], 90.00th=[25035], 95.00th=[25822], 00:11:18.894 | 99.00th=[27919], 99.50th=[27919], 99.90th=[28181], 99.95th=[44303], 00:11:18.894 | 99.99th=[46400] 00:11:18.894 bw ( KiB/s): min=12288, max=12312, per=23.69%, avg=12300.00, stdev=16.97, samples=2 00:11:18.894 iops : min= 3072, max= 3078, avg=3075.00, stdev= 4.24, samples=2 00:11:18.894 lat (msec) : 4=0.02%, 10=1.33%, 20=29.12%, 50=69.54% 00:11:18.894 cpu : usr=3.09%, sys=8.17%, ctx=178, majf=0, minf=7 00:11:18.894 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:11:18.894 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:18.894 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:18.894 issued rwts: total=2886,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:18.894 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:18.894 job3: (groupid=0, jobs=1): err= 0: pid=66498: Tue Nov 19 09:39:06 2024 00:11:18.894 read: IOPS=2910, BW=11.4MiB/s (11.9MB/s)(11.4MiB/1003msec) 00:11:18.894 slat (usec): min=6, max=7047, avg=167.78, stdev=698.35 00:11:18.894 clat (usec): min=1170, max=30113, avg=20922.72, stdev=2959.74 00:11:18.894 lat (usec): min=4010, max=30141, avg=21090.50, stdev=3010.00 00:11:18.894 clat percentiles (usec): 00:11:18.894 | 1.00th=[ 7504], 5.00th=[16909], 10.00th=[17695], 20.00th=[19792], 00:11:18.894 | 
30.00th=[20317], 40.00th=[20841], 50.00th=[21103], 60.00th=[21627], 00:11:18.894 | 70.00th=[22152], 80.00th=[22414], 90.00th=[23725], 95.00th=[24773], 00:11:18.894 | 99.00th=[26870], 99.50th=[27919], 99.90th=[28967], 99.95th=[29754], 00:11:18.894 | 99.99th=[30016] 00:11:18.894 write: IOPS=3062, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1003msec); 0 zone resets 00:11:18.894 slat (usec): min=9, max=6612, avg=157.80, stdev=644.69 00:11:18.894 clat (usec): min=14927, max=28723, avg=21244.35, stdev=2070.36 00:11:18.894 lat (usec): min=14948, max=28765, avg=21402.15, stdev=2136.58 00:11:18.894 clat percentiles (usec): 00:11:18.894 | 1.00th=[17433], 5.00th=[17695], 10.00th=[17957], 20.00th=[20055], 00:11:18.894 | 30.00th=[20579], 40.00th=[21103], 50.00th=[21365], 60.00th=[21365], 00:11:18.894 | 70.00th=[22152], 80.00th=[22676], 90.00th=[23462], 95.00th=[25035], 00:11:18.894 | 99.00th=[26870], 99.50th=[27132], 99.90th=[28443], 99.95th=[28705], 00:11:18.894 | 99.99th=[28705] 00:11:18.894 bw ( KiB/s): min=12288, max=12312, per=23.69%, avg=12300.00, stdev=16.97, samples=2 00:11:18.894 iops : min= 3072, max= 3078, avg=3075.00, stdev= 4.24, samples=2 00:11:18.894 lat (msec) : 2=0.02%, 10=0.70%, 20=21.10%, 50=78.18% 00:11:18.894 cpu : usr=2.69%, sys=9.58%, ctx=387, majf=0, minf=7 00:11:18.894 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:11:18.894 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:18.894 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:18.894 issued rwts: total=2919,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:18.894 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:18.894 00:11:18.894 Run status group 0 (all jobs): 00:11:18.894 READ: bw=46.4MiB/s (48.7MB/s), 11.2MiB/s-12.0MiB/s (11.8MB/s-12.5MB/s), io=46.7MiB (48.9MB), run=1003-1006msec 00:11:18.894 WRITE: bw=50.7MiB/s (53.2MB/s), 11.9MiB/s-13.7MiB/s (12.5MB/s-14.3MB/s), io=51.0MiB (53.5MB), run=1003-1006msec 00:11:18.894 00:11:18.894 Disk stats (read/write): 00:11:18.894 nvme0n1: ios=2610/2944, merge=0/0, ticks=51168/51022, in_queue=102190, util=88.28% 00:11:18.894 nvme0n2: ios=2601/2912, merge=0/0, ticks=12238/12832, in_queue=25070, util=88.45% 00:11:18.894 nvme0n3: ios=2560/2647, merge=0/0, ticks=53651/48922, in_queue=102573, util=89.04% 00:11:18.894 nvme0n4: ios=2447/2560, merge=0/0, ticks=17327/16893, in_queue=34220, util=89.49% 00:11:18.894 09:39:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:11:18.894 [global] 00:11:18.894 thread=1 00:11:18.894 invalidate=1 00:11:18.894 rw=randwrite 00:11:18.894 time_based=1 00:11:18.894 runtime=1 00:11:18.894 ioengine=libaio 00:11:18.894 direct=1 00:11:18.894 bs=4096 00:11:18.894 iodepth=128 00:11:18.894 norandommap=0 00:11:18.894 numjobs=1 00:11:18.894 00:11:18.894 verify_dump=1 00:11:18.894 verify_backlog=512 00:11:18.894 verify_state_save=0 00:11:18.894 do_verify=1 00:11:18.894 verify=crc32c-intel 00:11:18.894 [job0] 00:11:18.894 filename=/dev/nvme0n1 00:11:18.894 [job1] 00:11:18.894 filename=/dev/nvme0n2 00:11:18.894 [job2] 00:11:18.894 filename=/dev/nvme0n3 00:11:18.894 [job3] 00:11:18.895 filename=/dev/nvme0n4 00:11:18.895 Could not set queue depth (nvme0n1) 00:11:18.895 Could not set queue depth (nvme0n2) 00:11:18.895 Could not set queue depth (nvme0n3) 00:11:18.895 Could not set queue depth (nvme0n4) 00:11:18.895 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, 
(T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:18.895 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:18.895 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:18.895 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:18.895 fio-3.35 00:11:18.895 Starting 4 threads 00:11:20.269 00:11:20.269 job0: (groupid=0, jobs=1): err= 0: pid=66551: Tue Nov 19 09:39:07 2024 00:11:20.269 read: IOPS=2039, BW=8159KiB/s (8355kB/s)(8192KiB/1004msec) 00:11:20.269 slat (usec): min=6, max=21071, avg=217.74, stdev=1357.47 00:11:20.269 clat (usec): min=13547, max=49452, avg=28795.83, stdev=5411.11 00:11:20.269 lat (usec): min=13563, max=53006, avg=29013.57, stdev=5471.98 00:11:20.269 clat percentiles (usec): 00:11:20.269 | 1.00th=[18482], 5.00th=[19268], 10.00th=[20317], 20.00th=[23200], 00:11:20.269 | 30.00th=[27919], 40.00th=[28705], 50.00th=[29492], 60.00th=[29754], 00:11:20.269 | 70.00th=[30802], 80.00th=[33817], 90.00th=[34866], 95.00th=[36963], 00:11:20.269 | 99.00th=[42206], 99.50th=[43779], 99.90th=[45876], 99.95th=[49021], 00:11:20.269 | 99.99th=[49546] 00:11:20.269 write: IOPS=2420, BW=9681KiB/s (9914kB/s)(9720KiB/1004msec); 0 zone resets 00:11:20.269 slat (usec): min=9, max=25853, avg=220.92, stdev=1390.65 00:11:20.269 clat (usec): min=1181, max=86401, avg=28093.26, stdev=15089.04 00:11:20.269 lat (usec): min=7870, max=86412, avg=28314.18, stdev=15146.01 00:11:20.269 clat percentiles (usec): 00:11:20.269 | 1.00th=[ 8455], 5.00th=[13304], 10.00th=[14222], 20.00th=[15664], 00:11:20.269 | 30.00th=[16450], 40.00th=[20841], 50.00th=[27395], 60.00th=[30278], 00:11:20.269 | 70.00th=[31851], 80.00th=[33162], 90.00th=[46400], 95.00th=[58983], 00:11:20.269 | 99.00th=[82314], 99.50th=[83362], 99.90th=[86508], 99.95th=[86508], 00:11:20.269 | 99.99th=[86508] 00:11:20.269 bw ( KiB/s): min= 8264, max=10147, per=20.27%, avg=9205.50, stdev=1331.48, samples=2 00:11:20.269 iops : min= 2066, max= 2536, avg=2301.00, stdev=332.34, samples=2 00:11:20.269 lat (msec) : 2=0.02%, 10=1.59%, 20=22.51%, 50=70.77%, 100=5.11% 00:11:20.269 cpu : usr=2.19%, sys=6.38%, ctx=190, majf=0, minf=6 00:11:20.269 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:11:20.269 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:20.269 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:20.269 issued rwts: total=2048,2430,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:20.269 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:20.269 job1: (groupid=0, jobs=1): err= 0: pid=66552: Tue Nov 19 09:39:07 2024 00:11:20.269 read: IOPS=2021, BW=8087KiB/s (8281kB/s)(8192KiB/1013msec) 00:11:20.269 slat (usec): min=7, max=16316, avg=237.12, stdev=1317.94 00:11:20.269 clat (usec): min=13807, max=81645, avg=28655.76, stdev=8930.80 00:11:20.269 lat (usec): min=13824, max=81718, avg=28892.88, stdev=9043.28 00:11:20.269 clat percentiles (usec): 00:11:20.269 | 1.00th=[15795], 5.00th=[18220], 10.00th=[20055], 20.00th=[20579], 00:11:20.269 | 30.00th=[23725], 40.00th=[27919], 50.00th=[28705], 60.00th=[29492], 00:11:20.269 | 70.00th=[30802], 80.00th=[33817], 90.00th=[34341], 95.00th=[39584], 00:11:20.269 | 99.00th=[66847], 99.50th=[73925], 99.90th=[81265], 99.95th=[81265], 00:11:20.269 | 99.99th=[81265] 00:11:20.269 write: IOPS=2115, BW=8462KiB/s (8665kB/s)(8572KiB/1013msec); 0 zone 
resets 00:11:20.269 slat (usec): min=13, max=20753, avg=231.76, stdev=1372.84 00:11:20.269 clat (msec): min=9, max=100, avg=32.30, stdev=16.08 00:11:20.269 lat (msec): min=11, max=100, avg=32.53, stdev=16.18 00:11:20.269 clat percentiles (msec): 00:11:20.269 | 1.00th=[ 13], 5.00th=[ 16], 10.00th=[ 16], 20.00th=[ 18], 00:11:20.269 | 30.00th=[ 25], 40.00th=[ 29], 50.00th=[ 31], 60.00th=[ 33], 00:11:20.269 | 70.00th=[ 33], 80.00th=[ 41], 90.00th=[ 51], 95.00th=[ 70], 00:11:20.269 | 99.00th=[ 90], 99.50th=[ 92], 99.90th=[ 102], 99.95th=[ 102], 00:11:20.269 | 99.99th=[ 102] 00:11:20.269 bw ( KiB/s): min= 8151, max= 8216, per=18.02%, avg=8183.50, stdev=45.96, samples=2 00:11:20.269 iops : min= 2037, max= 2054, avg=2045.50, stdev=12.02, samples=2 00:11:20.269 lat (msec) : 10=0.02%, 20=17.08%, 50=76.00%, 100=6.73%, 250=0.17% 00:11:20.269 cpu : usr=2.17%, sys=6.72%, ctx=197, majf=0, minf=9 00:11:20.269 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:11:20.269 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:20.269 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:20.269 issued rwts: total=2048,2143,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:20.270 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:20.270 job2: (groupid=0, jobs=1): err= 0: pid=66553: Tue Nov 19 09:39:07 2024 00:11:20.270 read: IOPS=3569, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1004msec) 00:11:20.270 slat (usec): min=8, max=10326, avg=128.47, stdev=845.23 00:11:20.270 clat (usec): min=8702, max=29004, avg=17730.78, stdev=2349.16 00:11:20.270 lat (usec): min=8716, max=33934, avg=17859.26, stdev=2377.76 00:11:20.270 clat percentiles (usec): 00:11:20.270 | 1.00th=[10421], 5.00th=[14353], 10.00th=[15401], 20.00th=[16188], 00:11:20.270 | 30.00th=[16909], 40.00th=[17433], 50.00th=[17957], 60.00th=[18220], 00:11:20.270 | 70.00th=[18744], 80.00th=[19006], 90.00th=[19792], 95.00th=[20579], 00:11:20.270 | 99.00th=[27132], 99.50th=[28443], 99.90th=[28967], 99.95th=[28967], 00:11:20.270 | 99.99th=[28967] 00:11:20.270 write: IOPS=3759, BW=14.7MiB/s (15.4MB/s)(14.7MiB/1004msec); 0 zone resets 00:11:20.270 slat (usec): min=10, max=17473, avg=134.63, stdev=865.09 00:11:20.270 clat (usec): min=521, max=30425, avg=16847.80, stdev=2980.00 00:11:20.270 lat (usec): min=7922, max=30449, avg=16982.42, stdev=2891.54 00:11:20.270 clat percentiles (usec): 00:11:20.270 | 1.00th=[ 8717], 5.00th=[12780], 10.00th=[14484], 20.00th=[15270], 00:11:20.270 | 30.00th=[15795], 40.00th=[16188], 50.00th=[16450], 60.00th=[16909], 00:11:20.270 | 70.00th=[17433], 80.00th=[18220], 90.00th=[20317], 95.00th=[21365], 00:11:20.270 | 99.00th=[30016], 99.50th=[30278], 99.90th=[30278], 99.95th=[30540], 00:11:20.270 | 99.99th=[30540] 00:11:20.270 bw ( KiB/s): min=13349, max=15895, per=32.20%, avg=14622.00, stdev=1800.29, samples=2 00:11:20.270 iops : min= 3337, max= 3973, avg=3655.00, stdev=449.72, samples=2 00:11:20.270 lat (usec) : 750=0.01% 00:11:20.270 lat (msec) : 10=1.40%, 20=87.59%, 50=10.99% 00:11:20.270 cpu : usr=4.19%, sys=10.77%, ctx=158, majf=0, minf=5 00:11:20.270 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:11:20.270 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:20.270 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:20.270 issued rwts: total=3584,3775,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:20.270 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:20.270 job3: (groupid=0, 
jobs=1): err= 0: pid=66554: Tue Nov 19 09:39:07 2024 00:11:20.270 read: IOPS=3065, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1002msec) 00:11:20.270 slat (usec): min=7, max=10215, avg=155.25, stdev=1040.80 00:11:20.270 clat (usec): min=12151, max=33496, avg=21227.37, stdev=2322.48 00:11:20.270 lat (usec): min=12169, max=39601, avg=21382.62, stdev=2357.92 00:11:20.270 clat percentiles (usec): 00:11:20.270 | 1.00th=[13042], 5.00th=[18482], 10.00th=[19792], 20.00th=[20579], 00:11:20.270 | 30.00th=[20841], 40.00th=[21103], 50.00th=[21365], 60.00th=[21627], 00:11:20.270 | 70.00th=[21890], 80.00th=[22152], 90.00th=[22676], 95.00th=[22938], 00:11:20.270 | 99.00th=[31327], 99.50th=[32900], 99.90th=[33424], 99.95th=[33424], 00:11:20.270 | 99.99th=[33424] 00:11:20.270 write: IOPS=3144, BW=12.3MiB/s (12.9MB/s)(12.3MiB/1002msec); 0 zone resets 00:11:20.270 slat (usec): min=11, max=16043, avg=156.93, stdev=1023.70 00:11:20.270 clat (usec): min=1442, max=29275, avg=19573.73, stdev=2693.02 00:11:20.270 lat (usec): min=8517, max=29308, avg=19730.66, stdev=2532.68 00:11:20.270 clat percentiles (usec): 00:11:20.270 | 1.00th=[ 9503], 5.00th=[16712], 10.00th=[17695], 20.00th=[18220], 00:11:20.270 | 30.00th=[18744], 40.00th=[19268], 50.00th=[19792], 60.00th=[20317], 00:11:20.270 | 70.00th=[20579], 80.00th=[21103], 90.00th=[21365], 95.00th=[21627], 00:11:20.270 | 99.00th=[28967], 99.50th=[29230], 99.90th=[29230], 99.95th=[29230], 00:11:20.270 | 99.99th=[29230] 00:11:20.270 bw ( KiB/s): min=12263, max=12312, per=27.06%, avg=12287.50, stdev=34.65, samples=2 00:11:20.270 iops : min= 3065, max= 3078, avg=3071.50, stdev= 9.19, samples=2 00:11:20.270 lat (msec) : 2=0.02%, 10=0.72%, 20=32.73%, 50=66.53% 00:11:20.270 cpu : usr=3.00%, sys=9.49%, ctx=127, majf=0, minf=4 00:11:20.270 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:11:20.270 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:20.270 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:20.270 issued rwts: total=3072,3151,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:20.270 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:20.270 00:11:20.270 Run status group 0 (all jobs): 00:11:20.270 READ: bw=41.5MiB/s (43.5MB/s), 8087KiB/s-13.9MiB/s (8281kB/s-14.6MB/s), io=42.0MiB (44.0MB), run=1002-1013msec 00:11:20.270 WRITE: bw=44.3MiB/s (46.5MB/s), 8462KiB/s-14.7MiB/s (8665kB/s-15.4MB/s), io=44.9MiB (47.1MB), run=1002-1013msec 00:11:20.270 00:11:20.270 Disk stats (read/write): 00:11:20.270 nvme0n1: ios=1586/2047, merge=0/0, ticks=42807/60361, in_queue=103168, util=87.47% 00:11:20.270 nvme0n2: ios=1585/2047, merge=0/0, ticks=20712/28803, in_queue=49515, util=88.65% 00:11:20.270 nvme0n3: ios=3064/3144, merge=0/0, ticks=51345/49432, in_queue=100777, util=88.87% 00:11:20.270 nvme0n4: ios=2560/2688, merge=0/0, ticks=52251/49894, in_queue=102145, util=89.52% 00:11:20.270 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:11:20.270 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=66568 00:11:20.270 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:11:20.270 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:11:20.270 [global] 00:11:20.270 thread=1 00:11:20.270 invalidate=1 00:11:20.270 rw=read 00:11:20.270 time_based=1 00:11:20.270 runtime=10 00:11:20.270 ioengine=libaio 00:11:20.270 
direct=1 00:11:20.270 bs=4096 00:11:20.270 iodepth=1 00:11:20.270 norandommap=1 00:11:20.270 numjobs=1 00:11:20.270 00:11:20.270 [job0] 00:11:20.270 filename=/dev/nvme0n1 00:11:20.270 [job1] 00:11:20.270 filename=/dev/nvme0n2 00:11:20.270 [job2] 00:11:20.270 filename=/dev/nvme0n3 00:11:20.270 [job3] 00:11:20.270 filename=/dev/nvme0n4 00:11:20.270 Could not set queue depth (nvme0n1) 00:11:20.270 Could not set queue depth (nvme0n2) 00:11:20.270 Could not set queue depth (nvme0n3) 00:11:20.270 Could not set queue depth (nvme0n4) 00:11:20.270 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:20.270 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:20.270 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:20.270 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:20.270 fio-3.35 00:11:20.270 Starting 4 threads 00:11:23.630 09:39:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:11:23.630 fio: pid=66611, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:23.630 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=36032512, buflen=4096 00:11:23.630 09:39:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:11:23.630 fio: pid=66610, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:23.630 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=58441728, buflen=4096 00:11:23.630 09:39:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:23.630 09:39:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:11:23.888 fio: pid=66608, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:23.888 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=44974080, buflen=4096 00:11:23.888 09:39:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:23.888 09:39:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:11:24.146 fio: pid=66609, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:24.146 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=58335232, buflen=4096 00:11:24.146 00:11:24.146 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=66608: Tue Nov 19 09:39:11 2024 00:11:24.146 read: IOPS=3109, BW=12.1MiB/s (12.7MB/s)(42.9MiB/3532msec) 00:11:24.146 slat (usec): min=8, max=14238, avg=19.90, stdev=221.69 00:11:24.146 clat (usec): min=141, max=4059, avg=300.14, stdev=96.11 00:11:24.146 lat (usec): min=156, max=14531, avg=320.03, stdev=241.25 00:11:24.146 clat percentiles (usec): 00:11:24.146 | 1.00th=[ 180], 5.00th=[ 210], 10.00th=[ 233], 20.00th=[ 249], 00:11:24.146 | 30.00th=[ 262], 40.00th=[ 273], 50.00th=[ 281], 60.00th=[ 293], 00:11:24.146 | 70.00th=[ 314], 80.00th=[ 338], 90.00th=[ 388], 
95.00th=[ 453], 00:11:24.146 | 99.00th=[ 537], 99.50th=[ 586], 99.90th=[ 1139], 99.95th=[ 1844], 00:11:24.146 | 99.99th=[ 3097] 00:11:24.146 bw ( KiB/s): min= 9730, max=13376, per=23.82%, avg=12060.33, stdev=1653.13, samples=6 00:11:24.146 iops : min= 2432, max= 3344, avg=3015.00, stdev=413.42, samples=6 00:11:24.146 lat (usec) : 250=20.27%, 500=77.15%, 750=2.36%, 1000=0.06% 00:11:24.146 lat (msec) : 2=0.11%, 4=0.03%, 10=0.01% 00:11:24.146 cpu : usr=1.30%, sys=4.25%, ctx=10993, majf=0, minf=1 00:11:24.146 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:24.146 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:24.146 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:24.146 issued rwts: total=10981,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:24.146 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:24.146 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=66609: Tue Nov 19 09:39:11 2024 00:11:24.146 read: IOPS=3733, BW=14.6MiB/s (15.3MB/s)(55.6MiB/3815msec) 00:11:24.146 slat (usec): min=12, max=14371, avg=18.74, stdev=185.05 00:11:24.146 clat (usec): min=155, max=2234, avg=247.49, stdev=35.58 00:11:24.146 lat (usec): min=169, max=14655, avg=266.23, stdev=188.61 00:11:24.146 clat percentiles (usec): 00:11:24.146 | 1.00th=[ 178], 5.00th=[ 194], 10.00th=[ 212], 20.00th=[ 229], 00:11:24.146 | 30.00th=[ 237], 40.00th=[ 243], 50.00th=[ 249], 60.00th=[ 255], 00:11:24.146 | 70.00th=[ 262], 80.00th=[ 269], 90.00th=[ 277], 95.00th=[ 285], 00:11:24.146 | 99.00th=[ 302], 99.50th=[ 314], 99.90th=[ 537], 99.95th=[ 652], 00:11:24.146 | 99.99th=[ 1778] 00:11:24.146 bw ( KiB/s): min=14365, max=15127, per=29.10%, avg=14735.43, stdev=222.53, samples=7 00:11:24.146 iops : min= 3591, max= 3781, avg=3683.71, stdev=55.48, samples=7 00:11:24.146 lat (usec) : 250=51.61%, 500=48.28%, 750=0.09% 00:11:24.146 lat (msec) : 2=0.01%, 4=0.01% 00:11:24.146 cpu : usr=1.36%, sys=4.77%, ctx=14251, majf=0, minf=1 00:11:24.146 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:24.146 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:24.146 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:24.146 issued rwts: total=14243,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:24.146 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:24.146 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=66610: Tue Nov 19 09:39:11 2024 00:11:24.146 read: IOPS=4348, BW=17.0MiB/s (17.8MB/s)(55.7MiB/3281msec) 00:11:24.146 slat (usec): min=11, max=10360, avg=16.25, stdev=117.52 00:11:24.146 clat (usec): min=144, max=3602, avg=212.02, stdev=45.82 00:11:24.146 lat (usec): min=166, max=10841, avg=228.26, stdev=132.34 00:11:24.146 clat percentiles (usec): 00:11:24.146 | 1.00th=[ 165], 5.00th=[ 176], 10.00th=[ 182], 20.00th=[ 190], 00:11:24.146 | 30.00th=[ 198], 40.00th=[ 202], 50.00th=[ 208], 60.00th=[ 215], 00:11:24.146 | 70.00th=[ 223], 80.00th=[ 233], 90.00th=[ 245], 95.00th=[ 258], 00:11:24.146 | 99.00th=[ 277], 99.50th=[ 289], 99.90th=[ 330], 99.95th=[ 570], 00:11:24.146 | 99.99th=[ 2540] 00:11:24.146 bw ( KiB/s): min=16793, max=18040, per=34.51%, avg=17474.83, stdev=504.47, samples=6 00:11:24.146 iops : min= 4198, max= 4510, avg=4368.67, stdev=126.19, samples=6 00:11:24.146 lat (usec) : 250=92.30%, 500=7.64%, 750=0.01%, 1000=0.01% 00:11:24.146 lat (msec) : 
2=0.01%, 4=0.01% 00:11:24.146 cpu : usr=1.28%, sys=5.98%, ctx=14273, majf=0, minf=2 00:11:24.146 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:24.146 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:24.146 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:24.146 issued rwts: total=14269,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:24.146 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:24.146 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=66611: Tue Nov 19 09:39:11 2024 00:11:24.146 read: IOPS=3001, BW=11.7MiB/s (12.3MB/s)(34.4MiB/2931msec) 00:11:24.146 slat (nsec): min=8467, max=85701, avg=13549.12, stdev=5743.94 00:11:24.146 clat (usec): min=200, max=7638, avg=318.14, stdev=117.08 00:11:24.146 lat (usec): min=216, max=7717, avg=331.69, stdev=119.42 00:11:24.146 clat percentiles (usec): 00:11:24.146 | 1.00th=[ 239], 5.00th=[ 249], 10.00th=[ 255], 20.00th=[ 269], 00:11:24.146 | 30.00th=[ 277], 40.00th=[ 285], 50.00th=[ 297], 60.00th=[ 310], 00:11:24.146 | 70.00th=[ 326], 80.00th=[ 355], 90.00th=[ 404], 95.00th=[ 469], 00:11:24.146 | 99.00th=[ 537], 99.50th=[ 586], 99.90th=[ 1012], 99.95th=[ 1369], 00:11:24.146 | 99.99th=[ 7635] 00:11:24.146 bw ( KiB/s): min=10184, max=13376, per=24.57%, avg=12441.60, stdev=1310.10, samples=5 00:11:24.146 iops : min= 2546, max= 3344, avg=3110.40, stdev=327.53, samples=5 00:11:24.146 lat (usec) : 250=5.83%, 500=91.63%, 750=2.33%, 1000=0.08% 00:11:24.146 lat (msec) : 2=0.08%, 4=0.01%, 10=0.02% 00:11:24.146 cpu : usr=1.06%, sys=3.65%, ctx=8804, majf=0, minf=1 00:11:24.146 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:24.146 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:24.146 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:24.146 issued rwts: total=8798,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:24.146 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:24.146 00:11:24.146 Run status group 0 (all jobs): 00:11:24.146 READ: bw=49.4MiB/s (51.8MB/s), 11.7MiB/s-17.0MiB/s (12.3MB/s-17.8MB/s), io=189MiB (198MB), run=2931-3815msec 00:11:24.146 00:11:24.146 Disk stats (read/write): 00:11:24.146 nvme0n1: ios=10334/0, merge=0/0, ticks=3110/0, in_queue=3110, util=95.25% 00:11:24.146 nvme0n2: ios=13339/0, merge=0/0, ticks=3398/0, in_queue=3398, util=95.66% 00:11:24.146 nvme0n3: ios=13618/0, merge=0/0, ticks=2937/0, in_queue=2937, util=96.34% 00:11:24.146 nvme0n4: ios=8656/0, merge=0/0, ticks=2602/0, in_queue=2602, util=96.53% 00:11:24.146 09:39:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:24.146 09:39:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:11:24.404 09:39:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:24.404 09:39:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:11:24.662 09:39:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:24.662 09:39:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:11:25.226 09:39:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:25.226 09:39:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:11:25.483 09:39:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:25.483 09:39:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:11:26.047 09:39:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:11:26.047 09:39:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 66568 00:11:26.047 09:39:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:11:26.047 09:39:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:26.047 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:26.047 09:39:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:26.047 09:39:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:11:26.047 09:39:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:26.047 09:39:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:26.047 09:39:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:26.047 09:39:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:26.048 09:39:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:11:26.048 09:39:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:11:26.048 09:39:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:11:26.048 nvmf hotplug test: fio failed as expected 00:11:26.048 09:39:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:26.305 09:39:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:11:26.305 09:39:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:11:26.305 09:39:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:11:26.305 09:39:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:11:26.305 09:39:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:11:26.305 09:39:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:26.305 09:39:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:11:26.305 09:39:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:26.305 09:39:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@124 -- # set +e 00:11:26.305 09:39:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:26.305 09:39:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:26.305 rmmod nvme_tcp 00:11:26.305 rmmod nvme_fabrics 00:11:26.305 rmmod nvme_keyring 00:11:26.305 09:39:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:26.305 09:39:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:11:26.305 09:39:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:11:26.305 09:39:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 66181 ']' 00:11:26.305 09:39:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 66181 00:11:26.305 09:39:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 66181 ']' 00:11:26.305 09:39:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 66181 00:11:26.305 09:39:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:11:26.305 09:39:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:26.305 09:39:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66181 00:11:26.562 killing process with pid 66181 00:11:26.562 09:39:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:26.562 09:39:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:26.562 09:39:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66181' 00:11:26.562 09:39:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 66181 00:11:26.562 09:39:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 66181 00:11:26.562 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:26.562 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:26.562 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:26.562 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:11:26.562 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:26.562 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:11:26.562 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:11:26.562 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:26.562 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:11:26.562 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:11:26.562 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:11:26.562 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:11:26.820 09:39:14 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:11:26.820 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:11:26.820 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:11:26.820 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:11:26.820 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:11:26.820 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:11:26.820 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:11:26.820 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:11:26.820 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:26.820 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:26.820 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:11:26.820 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:26.820 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:26.820 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:26.820 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@300 -- # return 0 00:11:26.820 ************************************ 00:11:26.820 END TEST nvmf_fio_target 00:11:26.820 ************************************ 00:11:26.820 00:11:26.820 real 0m20.823s 00:11:26.820 user 1m19.741s 00:11:26.820 sys 0m9.281s 00:11:26.820 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:26.820 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:26.820 09:39:14 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:26.820 09:39:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:26.820 09:39:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:26.820 09:39:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:26.820 ************************************ 00:11:26.820 START TEST nvmf_bdevio 00:11:26.820 ************************************ 00:11:26.820 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:27.079 * Looking for test storage... 
00:11:27.079 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:27.079 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:27.079 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:11:27.079 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:27.079 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:27.079 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:27.079 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:27.079 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:27.079 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:11:27.079 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:11:27.079 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:11:27.079 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:11:27.079 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:11:27.079 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:11:27.079 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:11:27.079 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:27.079 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:11:27.079 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:11:27.079 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:27.079 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:27.079 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:11:27.079 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:11:27.079 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:27.079 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:11:27.079 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:11:27.079 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:11:27.079 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:11:27.079 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:27.079 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:11:27.079 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:11:27.079 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:27.079 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:27.079 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:11:27.079 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:27.079 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:27.079 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:27.079 --rc genhtml_branch_coverage=1 00:11:27.079 --rc genhtml_function_coverage=1 00:11:27.079 --rc genhtml_legend=1 00:11:27.079 --rc geninfo_all_blocks=1 00:11:27.079 --rc geninfo_unexecuted_blocks=1 00:11:27.079 00:11:27.079 ' 00:11:27.079 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:27.079 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:27.079 --rc genhtml_branch_coverage=1 00:11:27.079 --rc genhtml_function_coverage=1 00:11:27.079 --rc genhtml_legend=1 00:11:27.079 --rc geninfo_all_blocks=1 00:11:27.079 --rc geninfo_unexecuted_blocks=1 00:11:27.079 00:11:27.079 ' 00:11:27.079 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:27.079 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:27.079 --rc genhtml_branch_coverage=1 00:11:27.079 --rc genhtml_function_coverage=1 00:11:27.079 --rc genhtml_legend=1 00:11:27.079 --rc geninfo_all_blocks=1 00:11:27.079 --rc geninfo_unexecuted_blocks=1 00:11:27.079 00:11:27.079 ' 00:11:27.079 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:27.079 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:27.079 --rc genhtml_branch_coverage=1 00:11:27.079 --rc genhtml_function_coverage=1 00:11:27.079 --rc genhtml_legend=1 00:11:27.079 --rc geninfo_all_blocks=1 00:11:27.079 --rc geninfo_unexecuted_blocks=1 00:11:27.079 00:11:27.079 ' 00:11:27.079 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:27.079 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:11:27.079 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:11:27.079 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:27.079 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:27.079 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:27.079 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:27.079 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:27.079 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:27.079 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:27.079 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:27.079 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:27.079 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c 00:11:27.079 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=9203ba0c-8506-4f0b-a886-a7f874c4694c 00:11:27.079 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:27.079 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:27.079 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:27.079 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:27.080 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:27.080 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:11:27.080 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:27.080 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:27.080 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:27.080 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:27.080 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:27.080 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:27.080 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:11:27.080 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:27.080 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:11:27.080 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:27.080 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:27.080 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:27.080 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:27.080 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:27.080 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:27.080 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:27.080 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:27.080 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:27.080 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:27.080 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:27.080 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:27.080 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 
00:11:27.080 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:27.080 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:27.080 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:27.080 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:27.080 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:27.080 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:27.080 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:27.080 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:27.080 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:11:27.080 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:11:27.080 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:11:27.080 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:11:27.080 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:11:27.080 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@460 -- # nvmf_veth_init 00:11:27.080 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:27.080 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:11:27.080 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:11:27.080 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:11:27.080 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:27.080 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:11:27.080 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:27.080 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:11:27.080 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:27.080 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:11:27.080 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:27.080 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:27.080 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:27.080 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:27.080 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:27.080 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:27.080 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio 
-- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:11:27.080 Cannot find device "nvmf_init_br" 00:11:27.080 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:11:27.080 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:11:27.080 Cannot find device "nvmf_init_br2" 00:11:27.080 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:11:27.080 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:11:27.080 Cannot find device "nvmf_tgt_br" 00:11:27.080 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # true 00:11:27.080 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:11:27.080 Cannot find device "nvmf_tgt_br2" 00:11:27.080 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # true 00:11:27.080 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:11:27.080 Cannot find device "nvmf_init_br" 00:11:27.080 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # true 00:11:27.080 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:11:27.080 Cannot find device "nvmf_init_br2" 00:11:27.080 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # true 00:11:27.080 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:11:27.080 Cannot find device "nvmf_tgt_br" 00:11:27.080 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # true 00:11:27.080 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:11:27.080 Cannot find device "nvmf_tgt_br2" 00:11:27.080 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # true 00:11:27.080 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:11:27.338 Cannot find device "nvmf_br" 00:11:27.338 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # true 00:11:27.338 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:11:27.338 Cannot find device "nvmf_init_if" 00:11:27.338 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # true 00:11:27.338 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:11:27.338 Cannot find device "nvmf_init_if2" 00:11:27.338 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # true 00:11:27.338 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:27.338 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:27.338 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # true 00:11:27.338 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:27.338 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:27.338 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # true 00:11:27.338 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:11:27.338 
09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:27.338 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:11:27.338 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:27.338 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:27.338 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:27.338 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:27.338 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:27.338 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:11:27.338 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:11:27.338 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:11:27.338 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:11:27.338 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:11:27.338 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:11:27.338 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:11:27.338 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:11:27.338 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:11:27.338 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:27.338 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:27.338 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:27.338 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:11:27.338 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:11:27.338 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:11:27.339 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:11:27.339 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:27.339 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:27.339 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:27.339 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 
4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:11:27.339 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:11:27.339 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:11:27.339 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:27.339 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:11:27.339 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:11:27.339 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:27.339 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.128 ms 00:11:27.339 00:11:27.339 --- 10.0.0.3 ping statistics --- 00:11:27.339 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:27.339 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:11:27.339 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:11:27.339 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:11:27.339 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.044 ms 00:11:27.339 00:11:27.339 --- 10.0.0.4 ping statistics --- 00:11:27.339 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:27.339 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:11:27.339 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:27.339 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:27.339 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:11:27.339 00:11:27.339 --- 10.0.0.1 ping statistics --- 00:11:27.339 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:27.339 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:11:27.339 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:11:27.339 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:27.339 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.082 ms 00:11:27.339 00:11:27.339 --- 10.0.0.2 ping statistics --- 00:11:27.339 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:27.339 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:11:27.339 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:27.339 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@461 -- # return 0 00:11:27.339 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:27.339 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:27.339 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:27.339 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:27.339 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:27.339 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:27.339 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:27.596 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:11:27.596 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:27.596 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:27.596 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:27.596 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=66945 00:11:27.596 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:11:27.596 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 66945 00:11:27.596 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 66945 ']' 00:11:27.596 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:27.596 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:27.596 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:27.596 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:27.596 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:27.596 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:27.596 [2024-11-19 09:39:15.040513] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 
00:11:27.596 [2024-11-19 09:39:15.040605] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:27.596 [2024-11-19 09:39:15.186595] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:27.854 [2024-11-19 09:39:15.274164] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:27.854 [2024-11-19 09:39:15.274272] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:27.855 [2024-11-19 09:39:15.274288] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:27.855 [2024-11-19 09:39:15.274300] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:27.855 [2024-11-19 09:39:15.274312] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:27.855 [2024-11-19 09:39:15.276559] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:11:27.855 [2024-11-19 09:39:15.276654] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:11:27.855 [2024-11-19 09:39:15.276710] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:11:27.855 [2024-11-19 09:39:15.276718] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:27.855 [2024-11-19 09:39:15.354130] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:27.855 09:39:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:27.855 09:39:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:11:27.855 09:39:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:27.855 09:39:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:27.855 09:39:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:27.855 09:39:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:27.855 09:39:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:27.855 09:39:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.855 09:39:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:27.855 [2024-11-19 09:39:15.466847] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:27.855 09:39:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.855 09:39:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:27.855 09:39:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.855 09:39:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:28.113 Malloc0 00:11:28.113 09:39:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.113 09:39:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 00:11:28.113 09:39:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.113 09:39:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:28.113 09:39:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.113 09:39:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:28.113 09:39:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.113 09:39:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:28.113 09:39:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.113 09:39:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:11:28.113 09:39:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.113 09:39:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:28.113 [2024-11-19 09:39:15.542546] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:28.113 09:39:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.113 09:39:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:11:28.113 09:39:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:11:28.113 09:39:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:11:28.113 09:39:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:11:28.113 09:39:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:28.113 09:39:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:28.113 { 00:11:28.113 "params": { 00:11:28.113 "name": "Nvme$subsystem", 00:11:28.113 "trtype": "$TEST_TRANSPORT", 00:11:28.113 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:28.113 "adrfam": "ipv4", 00:11:28.113 "trsvcid": "$NVMF_PORT", 00:11:28.113 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:28.113 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:28.113 "hdgst": ${hdgst:-false}, 00:11:28.113 "ddgst": ${ddgst:-false} 00:11:28.113 }, 00:11:28.113 "method": "bdev_nvme_attach_controller" 00:11:28.113 } 00:11:28.113 EOF 00:11:28.113 )") 00:11:28.113 09:39:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:11:28.113 09:39:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
00:11:28.113 09:39:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:11:28.113 09:39:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:28.113 "params": { 00:11:28.113 "name": "Nvme1", 00:11:28.113 "trtype": "tcp", 00:11:28.113 "traddr": "10.0.0.3", 00:11:28.113 "adrfam": "ipv4", 00:11:28.113 "trsvcid": "4420", 00:11:28.113 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:28.113 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:28.113 "hdgst": false, 00:11:28.113 "ddgst": false 00:11:28.113 }, 00:11:28.113 "method": "bdev_nvme_attach_controller" 00:11:28.113 }' 00:11:28.113 [2024-11-19 09:39:15.598270] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:11:28.113 [2024-11-19 09:39:15.598371] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66974 ] 00:11:28.372 [2024-11-19 09:39:15.740697] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:28.372 [2024-11-19 09:39:15.801953] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:28.372 [2024-11-19 09:39:15.802097] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:28.372 [2024-11-19 09:39:15.802087] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:28.372 [2024-11-19 09:39:15.866661] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:28.372 I/O targets: 00:11:28.372 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:11:28.372 00:11:28.372 00:11:28.372 CUnit - A unit testing framework for C - Version 2.1-3 00:11:28.372 http://cunit.sourceforge.net/ 00:11:28.372 00:11:28.372 00:11:28.372 Suite: bdevio tests on: Nvme1n1 00:11:28.630 Test: blockdev write read block ...passed 00:11:28.630 Test: blockdev write zeroes read block ...passed 00:11:28.630 Test: blockdev write zeroes read no split ...passed 00:11:28.630 Test: blockdev write zeroes read split ...passed 00:11:28.630 Test: blockdev write zeroes read split partial ...passed 00:11:28.631 Test: blockdev reset ...[2024-11-19 09:39:16.021346] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:11:28.631 [2024-11-19 09:39:16.021477] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f64180 (9): Bad file descriptor 00:11:28.631 passed 00:11:28.631 Test: blockdev write read 8 blocks ...[2024-11-19 09:39:16.035104] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:11:28.631 passed 00:11:28.631 Test: blockdev write read size > 128k ...passed 00:11:28.631 Test: blockdev write read invalid size ...passed 00:11:28.631 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:28.631 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:28.631 Test: blockdev write read max offset ...passed 00:11:28.631 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:28.631 Test: blockdev writev readv 8 blocks ...passed 00:11:28.631 Test: blockdev writev readv 30 x 1block ...passed 00:11:28.631 Test: blockdev writev readv block ...passed 00:11:28.631 Test: blockdev writev readv size > 128k ...passed 00:11:28.631 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:28.631 Test: blockdev comparev and writev ...[2024-11-19 09:39:16.043465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:28.631 [2024-11-19 09:39:16.043522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:11:28.631 [2024-11-19 09:39:16.043545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:28.631 [2024-11-19 09:39:16.043558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:11:28.631 [2024-11-19 09:39:16.043892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:28.631 [2024-11-19 09:39:16.043918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:11:28.631 [2024-11-19 09:39:16.043937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:28.631 [2024-11-19 09:39:16.043948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:11:28.631 [2024-11-19 09:39:16.044308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:28.631 [2024-11-19 09:39:16.044333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:11:28.631 [2024-11-19 09:39:16.044352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:28.631 [2024-11-19 09:39:16.044363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:11:28.631 passed 00:11:28.631 Test: blockdev nvme passthru rw ...[2024-11-19 09:39:16.044689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:28.631 [2024-11-19 09:39:16.044712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:11:28.631 [2024-11-19 09:39:16.044731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:28.631 [2024-11-19 09:39:16.044741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED 
FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:11:28.631 passed 00:11:28.631 Test: blockdev nvme passthru vendor specific ...passed 00:11:28.631 Test: blockdev nvme admin passthru ...[2024-11-19 09:39:16.045586] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:28.631 [2024-11-19 09:39:16.045617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:11:28.631 [2024-11-19 09:39:16.045740] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:28.631 [2024-11-19 09:39:16.045758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:11:28.631 [2024-11-19 09:39:16.045873] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:28.631 [2024-11-19 09:39:16.045890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:11:28.631 [2024-11-19 09:39:16.046003] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:28.631 [2024-11-19 09:39:16.046020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:11:28.631 passed 00:11:28.631 Test: blockdev copy ...passed 00:11:28.631 00:11:28.631 Run Summary: Type Total Ran Passed Failed Inactive 00:11:28.631 suites 1 1 n/a 0 0 00:11:28.631 tests 23 23 23 0 0 00:11:28.631 asserts 152 152 152 0 n/a 00:11:28.631 00:11:28.631 Elapsed time = 0.153 seconds 00:11:28.631 09:39:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:28.631 09:39:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.631 09:39:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:28.631 09:39:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.631 09:39:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:11:28.631 09:39:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:11:28.631 09:39:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:28.631 09:39:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:11:28.890 09:39:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:28.890 09:39:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:11:28.890 09:39:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:28.890 09:39:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:28.890 rmmod nvme_tcp 00:11:28.890 rmmod nvme_fabrics 00:11:28.890 rmmod nvme_keyring 00:11:28.890 09:39:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:28.890 09:39:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:11:28.890 09:39:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:11:28.890 09:39:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@517 -- # '[' -n 66945 ']' 00:11:28.890 09:39:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 66945 00:11:28.890 09:39:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 66945 ']' 00:11:28.890 09:39:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 66945 00:11:28.890 09:39:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:11:28.890 09:39:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:28.890 09:39:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66945 00:11:28.890 09:39:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:11:28.890 09:39:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:11:28.890 killing process with pid 66945 00:11:28.890 09:39:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66945' 00:11:28.890 09:39:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 66945 00:11:28.890 09:39:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 66945 00:11:29.148 09:39:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:29.148 09:39:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:29.148 09:39:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:29.148 09:39:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:11:29.148 09:39:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:11:29.148 09:39:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:29.148 09:39:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:11:29.148 09:39:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:29.148 09:39:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:11:29.148 09:39:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:11:29.148 09:39:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:11:29.148 09:39:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:11:29.148 09:39:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:11:29.148 09:39:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:11:29.148 09:39:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:11:29.148 09:39:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:11:29.148 09:39:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:11:29.148 09:39:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:11:29.148 09:39:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:11:29.148 09:39:16 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:11:29.407 09:39:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:29.407 09:39:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:29.407 09:39:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@246 -- # remove_spdk_ns 00:11:29.407 09:39:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:29.407 09:39:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:29.407 09:39:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:29.407 09:39:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@300 -- # return 0 00:11:29.407 00:11:29.407 real 0m2.443s 00:11:29.407 user 0m6.654s 00:11:29.407 sys 0m0.863s 00:11:29.407 09:39:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:29.407 09:39:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:29.407 ************************************ 00:11:29.407 END TEST nvmf_bdevio 00:11:29.407 ************************************ 00:11:29.407 09:39:16 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:11:29.407 00:11:29.407 real 2m34.436s 00:11:29.407 user 6m44.814s 00:11:29.407 sys 0m51.541s 00:11:29.407 09:39:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:29.407 09:39:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:29.407 ************************************ 00:11:29.407 END TEST nvmf_target_core 00:11:29.407 ************************************ 00:11:29.407 09:39:16 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:29.407 09:39:16 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:29.407 09:39:16 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:29.407 09:39:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:29.407 ************************************ 00:11:29.407 START TEST nvmf_target_extra 00:11:29.407 ************************************ 00:11:29.407 09:39:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:29.407 * Looking for test storage... 
00:11:29.667 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:11:29.667 09:39:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:29.667 09:39:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lcov --version 00:11:29.667 09:39:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:29.667 09:39:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:29.667 09:39:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:29.667 09:39:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:29.667 09:39:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:29.667 09:39:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:11:29.667 09:39:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:11:29.667 09:39:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:11:29.667 09:39:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:11:29.667 09:39:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:11:29.667 09:39:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:11:29.667 09:39:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:11:29.667 09:39:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:29.667 09:39:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:11:29.667 09:39:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:11:29.667 09:39:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:29.667 09:39:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:29.667 09:39:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:11:29.667 09:39:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:11:29.667 09:39:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:29.667 09:39:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:11:29.667 09:39:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:11:29.667 09:39:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:11:29.667 09:39:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:11:29.667 09:39:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:29.667 09:39:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:11:29.667 09:39:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:11:29.667 09:39:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:29.667 09:39:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:29.667 09:39:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:11:29.667 09:39:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:29.667 09:39:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:29.667 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:29.667 --rc genhtml_branch_coverage=1 00:11:29.667 --rc genhtml_function_coverage=1 00:11:29.667 --rc genhtml_legend=1 00:11:29.667 --rc geninfo_all_blocks=1 00:11:29.667 --rc geninfo_unexecuted_blocks=1 00:11:29.667 00:11:29.667 ' 00:11:29.667 09:39:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:29.667 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:29.667 --rc genhtml_branch_coverage=1 00:11:29.667 --rc genhtml_function_coverage=1 00:11:29.667 --rc genhtml_legend=1 00:11:29.667 --rc geninfo_all_blocks=1 00:11:29.667 --rc geninfo_unexecuted_blocks=1 00:11:29.667 00:11:29.667 ' 00:11:29.667 09:39:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:29.667 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:29.667 --rc genhtml_branch_coverage=1 00:11:29.667 --rc genhtml_function_coverage=1 00:11:29.667 --rc genhtml_legend=1 00:11:29.667 --rc geninfo_all_blocks=1 00:11:29.667 --rc geninfo_unexecuted_blocks=1 00:11:29.667 00:11:29.667 ' 00:11:29.667 09:39:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:29.667 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:29.667 --rc genhtml_branch_coverage=1 00:11:29.667 --rc genhtml_function_coverage=1 00:11:29.667 --rc genhtml_legend=1 00:11:29.667 --rc geninfo_all_blocks=1 00:11:29.667 --rc geninfo_unexecuted_blocks=1 00:11:29.667 00:11:29.667 ' 00:11:29.667 09:39:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:29.667 09:39:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:11:29.667 09:39:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:29.667 09:39:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:29.667 09:39:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:29.667 09:39:17 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:29.667 09:39:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:29.667 09:39:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:29.667 09:39:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:29.667 09:39:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:29.667 09:39:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:29.667 09:39:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:29.667 09:39:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c 00:11:29.667 09:39:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=9203ba0c-8506-4f0b-a886-a7f874c4694c 00:11:29.667 09:39:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:29.668 09:39:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:29.668 09:39:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:29.668 09:39:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:29.668 09:39:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:29.668 09:39:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:11:29.668 09:39:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:29.668 09:39:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:29.668 09:39:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:29.668 09:39:17 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.668 09:39:17 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.668 09:39:17 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.668 09:39:17 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:11:29.668 09:39:17 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.668 09:39:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:11:29.668 09:39:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:29.668 09:39:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:29.668 09:39:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:29.668 09:39:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:29.668 09:39:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:29.668 09:39:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:29.668 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:29.668 09:39:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:29.668 09:39:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:29.668 09:39:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:29.668 09:39:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:11:29.668 09:39:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:11:29.668 09:39:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 1 -eq 0 ]] 00:11:29.668 09:39:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:11:29.668 09:39:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:29.668 09:39:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:29.668 09:39:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:29.668 ************************************ 00:11:29.668 START TEST nvmf_auth_target 00:11:29.668 ************************************ 00:11:29.668 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:11:29.668 * Looking for test storage... 
00:11:29.668 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:29.668 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:29.668 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:29.668 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lcov --version 00:11:29.927 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:29.927 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:29.927 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:29.927 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:29.927 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:11:29.927 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:11:29.927 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:11:29.927 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:11:29.927 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:11:29.927 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:11:29.927 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:11:29.927 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:29.927 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:11:29.927 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:11:29.927 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:29.927 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:29.927 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:11:29.927 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:11:29.927 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:29.927 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:11:29.927 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:11:29.927 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:11:29.927 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:11:29.927 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:29.927 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:11:29.928 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:11:29.928 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:29.928 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:29.928 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:11:29.928 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:29.928 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:29.928 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:29.928 --rc genhtml_branch_coverage=1 00:11:29.928 --rc genhtml_function_coverage=1 00:11:29.928 --rc genhtml_legend=1 00:11:29.928 --rc geninfo_all_blocks=1 00:11:29.928 --rc geninfo_unexecuted_blocks=1 00:11:29.928 00:11:29.928 ' 00:11:29.928 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:29.928 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:29.928 --rc genhtml_branch_coverage=1 00:11:29.928 --rc genhtml_function_coverage=1 00:11:29.928 --rc genhtml_legend=1 00:11:29.928 --rc geninfo_all_blocks=1 00:11:29.928 --rc geninfo_unexecuted_blocks=1 00:11:29.928 00:11:29.928 ' 00:11:29.928 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:29.928 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:29.928 --rc genhtml_branch_coverage=1 00:11:29.928 --rc genhtml_function_coverage=1 00:11:29.928 --rc genhtml_legend=1 00:11:29.928 --rc geninfo_all_blocks=1 00:11:29.928 --rc geninfo_unexecuted_blocks=1 00:11:29.928 00:11:29.928 ' 00:11:29.928 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:29.928 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:29.928 --rc genhtml_branch_coverage=1 00:11:29.928 --rc genhtml_function_coverage=1 00:11:29.928 --rc genhtml_legend=1 00:11:29.928 --rc geninfo_all_blocks=1 00:11:29.928 --rc geninfo_unexecuted_blocks=1 00:11:29.928 00:11:29.928 ' 00:11:29.928 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:29.928 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@7 -- # uname -s 00:11:29.928 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:29.928 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:29.928 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:29.928 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:29.928 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:29.928 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:29.928 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:29.928 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:29.928 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:29.928 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:29.928 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c 00:11:29.928 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=9203ba0c-8506-4f0b-a886-a7f874c4694c 00:11:29.928 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:29.928 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:29.928 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:29.928 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:29.928 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:29.928 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:11:29.928 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:29.928 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:29.928 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:29.928 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.928 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.928 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.928 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:11:29.928 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.928 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:11:29.928 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:29.928 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:29.928 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:29.928 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:29.928 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:29.928 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:29.928 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:29.928 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:29.928 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:29.928 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:29.928 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:11:29.928 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" 
"ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:11:29.928 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:11:29.928 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c 00:11:29.928 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:11:29.928 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:11:29.928 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:11:29.928 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:11:29.928 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:29.928 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:29.928 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:29.928 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:29.928 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:29.928 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:29.928 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:29.928 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:29.928 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:11:29.928 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:11:29.928 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:11:29.928 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:11:29.928 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:11:29.928 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:11:29.928 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:29.928 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:11:29.928 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:11:29.928 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:11:29.928 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:29.928 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:11:29.929 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:29.929 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:11:29.929 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:29.929 
09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:11:29.929 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:29.929 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:29.929 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:29.929 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:29.929 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:29.929 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:29.929 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:11:29.929 Cannot find device "nvmf_init_br" 00:11:29.929 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # true 00:11:29.929 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:11:29.929 Cannot find device "nvmf_init_br2" 00:11:29.929 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # true 00:11:29.929 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:11:29.929 Cannot find device "nvmf_tgt_br" 00:11:29.929 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # true 00:11:29.929 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:11:29.929 Cannot find device "nvmf_tgt_br2" 00:11:29.929 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # true 00:11:29.929 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:11:29.929 Cannot find device "nvmf_init_br" 00:11:29.929 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # true 00:11:29.929 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:11:29.929 Cannot find device "nvmf_init_br2" 00:11:29.929 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # true 00:11:29.929 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:11:29.929 Cannot find device "nvmf_tgt_br" 00:11:29.929 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # true 00:11:29.929 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:11:29.929 Cannot find device "nvmf_tgt_br2" 00:11:29.929 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # true 00:11:29.929 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:11:29.929 Cannot find device "nvmf_br" 00:11:29.929 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # true 00:11:29.929 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:11:29.929 Cannot find device "nvmf_init_if" 00:11:29.929 09:39:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # true 00:11:29.929 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:11:29.929 Cannot find device "nvmf_init_if2" 00:11:29.929 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # true 00:11:29.929 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:29.929 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:29.929 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # true 00:11:29.929 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:29.929 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:29.929 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # true 00:11:29.929 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:11:29.929 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:29.929 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:11:30.242 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:30.242 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:30.242 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:30.242 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:30.242 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:30.242 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:11:30.242 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:11:30.242 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:11:30.242 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:11:30.242 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:11:30.242 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:11:30.242 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:11:30.242 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:11:30.242 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:11:30.242 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:30.242 09:39:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:30.242 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:30.242 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:11:30.242 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:11:30.242 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:11:30.242 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:11:30.242 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:30.242 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:30.242 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:30.242 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:11:30.242 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:11:30.242 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:11:30.242 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:30.242 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:11:30.242 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:11:30.242 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:30.242 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.089 ms 00:11:30.242 00:11:30.242 --- 10.0.0.3 ping statistics --- 00:11:30.242 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:30.242 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:11:30.242 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:11:30.242 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:11:30.242 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.057 ms 00:11:30.242 00:11:30.242 --- 10.0.0.4 ping statistics --- 00:11:30.242 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:30.242 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:11:30.242 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:30.242 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:30.242 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:11:30.242 00:11:30.242 --- 10.0.0.1 ping statistics --- 00:11:30.242 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:30.242 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:11:30.242 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:11:30.242 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:30.242 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.048 ms 00:11:30.242 00:11:30.242 --- 10.0.0.2 ping statistics --- 00:11:30.242 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:30.242 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:11:30.242 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:30.242 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@461 -- # return 0 00:11:30.242 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:30.242 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:30.242 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:30.242 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:30.242 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:30.242 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:30.242 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:30.242 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:11:30.242 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:30.242 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:30.242 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:30.242 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=67261 00:11:30.242 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:11:30.242 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 67261 00:11:30.242 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 67261 ']' 00:11:30.242 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:30.242 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:30.242 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
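The nvmf_veth_init trace above stands up a two-namespace veth topology bridged on nvmf_br before the auth-target app is started in nvmf_tgt_ns_spdk. A condensed sketch of those steps, limited to the ip/iptables commands that actually appear in the trace (root privileges and iproute2/iptables are assumed; the helper's device-existence checks, cleanup pass, and SPDK_NVMF iptables comments are omitted):

#!/usr/bin/env bash
# Condensed recap of the nvmf_veth_init steps traced above; names and
# addresses match the log, error handling is intentionally left out.

ip netns add nvmf_tgt_ns_spdk

# veth pairs: the *_if ends are endpoints, the *_br ends get bridged;
# target-side endpoints move into the nvmf_tgt_ns_spdk namespace.
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

# Addressing: 10.0.0.1/.2 on the initiator side, 10.0.0.3/.4 on the target side.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

# Bring everything up on both sides.
for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 \
           nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" up
done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# Bridge the peer ends so both namespaces share one L2 segment.
ip link add nvmf_br type bridge
ip link set nvmf_br up
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" master nvmf_br
done

# Allow NVMe/TCP (port 4420) in on the initiator interfaces and
# permit bridge-local forwarding.
iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# Sanity pings in both directions, as in the log.
ping -c 1 10.0.0.3
ping -c 1 10.0.0.4
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2

Teardown mirrors the nvmf_veth_fini sequence visible at the end of the bdevio run above: detach the bridge ports (nomaster), bring the interfaces down, delete nvmf_br and the veth pairs, then remove the nvmf_tgt_ns_spdk namespace.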
00:11:30.242 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:30.242 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:30.825 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:30.825 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:11:30.825 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:30.825 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:30.825 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:30.825 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:30.825 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=67287 00:11:30.825 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:11:30.825 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:11:30.825 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:11:30.825 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:11:30.825 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:11:30.825 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:11:30.825 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:11:30.825 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:11:30.825 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:11:30.825 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=e1d042e6a6ebf2e2f0d80fdce833deeb4153fb891dc9ea86 00:11:30.825 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:11:30.825 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.frB 00:11:30.825 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key e1d042e6a6ebf2e2f0d80fdce833deeb4153fb891dc9ea86 0 00:11:30.825 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 e1d042e6a6ebf2e2f0d80fdce833deeb4153fb891dc9ea86 0 00:11:30.825 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:11:30.825 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:11:30.825 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=e1d042e6a6ebf2e2f0d80fdce833deeb4153fb891dc9ea86 00:11:30.825 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:11:30.825 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:11:30.825 09:39:18 
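At this point two SPDK applications are running: nvmf_tgt inside the target namespace with auth tracing enabled (-L nvmf_auth), answering RPCs on the default /var/tmp/spdk.sock, and a second spdk_tgt on the host acting as the NVMe-oF initiator, with its own RPC socket at /var/tmp/host.sock and -L nvme_auth. Reduced to the two launch commands exactly as logged, minus the waitforlisten plumbing:

# target side, run inside the namespace created above
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth &
# host/initiator side, on a separate RPC socket so both apps can be driven from the same shell
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth &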
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.frB 00:11:30.825 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.frB 00:11:30.825 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.frB 00:11:30.825 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:11:30.825 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:11:30.825 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:11:30.825 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:11:30.825 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:11:30.825 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:11:30.825 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:11:30.825 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=2e88b5e3990dc758886b18670d91c7940958e384e8cf301bd1efa43146e79476 00:11:30.825 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:11:30.825 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.xBS 00:11:30.825 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 2e88b5e3990dc758886b18670d91c7940958e384e8cf301bd1efa43146e79476 3 00:11:30.825 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 2e88b5e3990dc758886b18670d91c7940958e384e8cf301bd1efa43146e79476 3 00:11:30.825 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:11:30.825 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:11:30.825 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=2e88b5e3990dc758886b18670d91c7940958e384e8cf301bd1efa43146e79476 00:11:30.825 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:11:30.825 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:11:30.825 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.xBS 00:11:30.825 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.xBS 00:11:30.825 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.xBS 00:11:30.825 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:11:30.825 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:11:30.825 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:11:30.825 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:11:30.825 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:11:30.825 09:39:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:11:30.825 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:11:30.825 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=1359705100764b7223799ebd3b78dc31 00:11:30.826 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:11:30.826 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.t5J 00:11:30.826 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 1359705100764b7223799ebd3b78dc31 1 00:11:30.826 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 1359705100764b7223799ebd3b78dc31 1 00:11:30.826 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:11:30.826 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:11:30.826 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=1359705100764b7223799ebd3b78dc31 00:11:30.826 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:11:30.826 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:11:31.085 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.t5J 00:11:31.085 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.t5J 00:11:31.085 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.t5J 00:11:31.085 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:11:31.085 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:11:31.085 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:11:31.085 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:11:31.085 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:11:31.085 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:11:31.085 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:11:31.085 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=cf4f04183f1baf769ae8b03a41db97539f34a36e328d3056 00:11:31.085 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:11:31.085 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.MIh 00:11:31.085 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key cf4f04183f1baf769ae8b03a41db97539f34a36e328d3056 2 00:11:31.085 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 cf4f04183f1baf769ae8b03a41db97539f34a36e328d3056 2 00:11:31.085 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:11:31.085 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@732 -- # prefix=DHHC-1 00:11:31.085 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=cf4f04183f1baf769ae8b03a41db97539f34a36e328d3056 00:11:31.085 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:11:31.085 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:11:31.085 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.MIh 00:11:31.085 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.MIh 00:11:31.086 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.MIh 00:11:31.086 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:11:31.086 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:11:31.086 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:11:31.086 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:11:31.086 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:11:31.086 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:11:31.086 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:11:31.086 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=52578f46597812105d284944c8d37b74fb095d2bf09f4417 00:11:31.086 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:11:31.086 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.Ltk 00:11:31.086 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 52578f46597812105d284944c8d37b74fb095d2bf09f4417 2 00:11:31.086 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 52578f46597812105d284944c8d37b74fb095d2bf09f4417 2 00:11:31.086 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:11:31.086 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:11:31.086 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=52578f46597812105d284944c8d37b74fb095d2bf09f4417 00:11:31.086 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:11:31.086 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:11:31.086 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.Ltk 00:11:31.086 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.Ltk 00:11:31.086 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.Ltk 00:11:31.086 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:11:31.086 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:11:31.086 09:39:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:11:31.086 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:11:31.086 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:11:31.086 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:11:31.086 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:11:31.086 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=da28890373517d6740e69e2fb78594f4 00:11:31.086 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:11:31.086 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.EpZ 00:11:31.086 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key da28890373517d6740e69e2fb78594f4 1 00:11:31.086 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 da28890373517d6740e69e2fb78594f4 1 00:11:31.086 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:11:31.086 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:11:31.086 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=da28890373517d6740e69e2fb78594f4 00:11:31.086 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:11:31.086 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:11:31.086 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.EpZ 00:11:31.086 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.EpZ 00:11:31.086 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.EpZ 00:11:31.086 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:11:31.086 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:11:31.086 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:11:31.086 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:11:31.086 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:11:31.086 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:11:31.086 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:11:31.086 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=38ec2175098f7e3938e0da0522036bfc06fb01b897392f5191dd679753e66044 00:11:31.086 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:11:31.086 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.ElQ 00:11:31.086 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 
38ec2175098f7e3938e0da0522036bfc06fb01b897392f5191dd679753e66044 3 00:11:31.086 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 38ec2175098f7e3938e0da0522036bfc06fb01b897392f5191dd679753e66044 3 00:11:31.086 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:11:31.086 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:11:31.086 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=38ec2175098f7e3938e0da0522036bfc06fb01b897392f5191dd679753e66044 00:11:31.086 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:11:31.086 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:11:31.345 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.ElQ 00:11:31.345 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.ElQ 00:11:31.345 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.ElQ 00:11:31.345 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:11:31.345 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 67261 00:11:31.345 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 67261 ']' 00:11:31.345 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:31.345 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:31.345 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:31.345 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:31.345 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:31.345 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:31.605 09:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:31.605 09:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:11:31.605 09:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 67287 /var/tmp/host.sock 00:11:31.605 09:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 67287 ']' 00:11:31.605 09:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:11:31.605 09:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:31.605 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:11:31.605 09:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 
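target/auth.sh has now produced four host keys (keys[0..3]) and three controller keys (ckeys[0..2]; ckeys[3] is deliberately left empty), each written to a mode-0600 file under /tmp. gen_dhchap_key draws len/2 random bytes with xxd, keeps them as an ASCII hex string of the requested length, and format_dhchap_key wraps that string into the DHHC-1 secret syntax that reappears later on the nvme connect lines. A sketch of that wrapping, assuming the usual DHHC-1 layout of base64 over the secret bytes plus a little-endian CRC-32, with the hash id 0=null, 1=sha256, 2=sha384, 3=sha512:

key=$(xxd -p -c0 -l 24 /dev/urandom)   # 48 hex characters of secret material, as in gen_dhchap_key null 48
digest=0                               # hash id for a "null"-digest key
python3 -c '
import base64, sys, zlib
key, digest = sys.argv[1].encode(), int(sys.argv[2])
crc = zlib.crc32(key).to_bytes(4, "little")   # 4-byte checksum appended to the secret
print("DHHC-1:{:02x}:{}:".format(digest, base64.b64encode(key + crc).decode()))
' "$key" "$digest"

As a sanity check, base64-decoding the DHHC-1:00: secret used further down gives back the e1d042... hex string generated here followed by four checksum bytes, i.e. the wrapping is an encoding of the key, not a derivation from it.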
00:11:31.605 09:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:31.605 09:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:31.864 09:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:31.864 09:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:11:31.864 09:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:11:31.864 09:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.864 09:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:31.864 09:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.864 09:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:11:31.864 09:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.frB 00:11:31.864 09:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.864 09:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:31.864 09:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.864 09:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.frB 00:11:31.864 09:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.frB 00:11:32.122 09:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.xBS ]] 00:11:32.122 09:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.xBS 00:11:32.122 09:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.122 09:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:32.122 09:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.122 09:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.xBS 00:11:32.122 09:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.xBS 00:11:32.381 09:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:11:32.381 09:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.t5J 00:11:32.381 09:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.381 09:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:32.381 09:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.381 09:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.t5J 00:11:32.381 09:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.t5J 00:11:32.639 09:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.MIh ]] 00:11:32.639 09:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.MIh 00:11:32.639 09:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.639 09:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:32.639 09:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.639 09:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.MIh 00:11:32.639 09:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.MIh 00:11:32.898 09:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:11:32.898 09:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.Ltk 00:11:32.898 09:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.898 09:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:32.898 09:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.898 09:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.Ltk 00:11:32.898 09:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.Ltk 00:11:33.156 09:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.EpZ ]] 00:11:33.156 09:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.EpZ 00:11:33.156 09:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.156 09:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:33.156 09:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.156 09:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.EpZ 00:11:33.156 09:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.EpZ 00:11:33.415 09:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:11:33.415 09:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.ElQ 00:11:33.415 09:39:20 
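Each key file is registered twice under the same label: once with the target's keyring (rpc_cmd, default /var/tmp/spdk.sock) and once with the host app's keyring (hostrpc, -s /var/tmp/host.sock), so later RPCs can refer to key0..key3 and ckey0..ckey2 by name instead of by path. Collapsed into plain rpc.py calls, the loop the helpers above expand to looks roughly like this (keys/ckeys are the test script's own arrays of file paths):

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
for i in "${!keys[@]}"; do
    "$RPC" keyring_file_add_key "key$i" "${keys[i]}"                           # target-side keyring
    "$RPC" -s /var/tmp/host.sock keyring_file_add_key "key$i" "${keys[i]}"     # host-side keyring
    if [[ -n ${ckeys[i]} ]]; then                                              # ckey3 is empty and skipped
        "$RPC" keyring_file_add_key "ckey$i" "${ckeys[i]}"
        "$RPC" -s /var/tmp/host.sock keyring_file_add_key "ckey$i" "${ckeys[i]}"
    fi
done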
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.415 09:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:33.415 09:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.415 09:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.ElQ 00:11:33.415 09:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.ElQ 00:11:33.673 09:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:11:33.673 09:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:11:33.673 09:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:33.673 09:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:33.673 09:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:33.673 09:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:33.932 09:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:11:33.932 09:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:33.932 09:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:33.932 09:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:33.932 09:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:33.932 09:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:33.932 09:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:33.932 09:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.932 09:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:33.932 09:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.932 09:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:33.932 09:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:33.932 09:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:34.190 00:11:34.468 09:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:34.468 09:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:34.468 09:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:34.726 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:34.726 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:34.726 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.726 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:34.726 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.726 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:34.726 { 00:11:34.726 "cntlid": 1, 00:11:34.726 "qid": 0, 00:11:34.726 "state": "enabled", 00:11:34.726 "thread": "nvmf_tgt_poll_group_000", 00:11:34.726 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c", 00:11:34.726 "listen_address": { 00:11:34.726 "trtype": "TCP", 00:11:34.726 "adrfam": "IPv4", 00:11:34.726 "traddr": "10.0.0.3", 00:11:34.726 "trsvcid": "4420" 00:11:34.726 }, 00:11:34.726 "peer_address": { 00:11:34.726 "trtype": "TCP", 00:11:34.726 "adrfam": "IPv4", 00:11:34.726 "traddr": "10.0.0.1", 00:11:34.726 "trsvcid": "52158" 00:11:34.726 }, 00:11:34.726 "auth": { 00:11:34.726 "state": "completed", 00:11:34.726 "digest": "sha256", 00:11:34.726 "dhgroup": "null" 00:11:34.726 } 00:11:34.726 } 00:11:34.726 ]' 00:11:34.726 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:34.726 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:34.726 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:34.726 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:34.726 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:34.726 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:34.726 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:34.726 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:34.985 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTFkMDQyZTZhNmViZjJlMmYwZDgwZmRjZTgzM2RlZWI0MTUzZmI4OTFkYzllYTg2xOE6TQ==: --dhchap-ctrl-secret DHHC-1:03:MmU4OGI1ZTM5OTBkYzc1ODg4NmIxODY3MGQ5MWM3OTQwOTU4ZTM4NGU4Y2YzMDFiZDFlZmE0MzE0NmU3OTQ3NqAJprQ=: 00:11:34.985 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c --hostid 9203ba0c-8506-4f0b-a886-a7f874c4694c -l 0 --dhchap-secret DHHC-1:00:ZTFkMDQyZTZhNmViZjJlMmYwZDgwZmRjZTgzM2RlZWI0MTUzZmI4OTFkYzllYTg2xOE6TQ==: --dhchap-ctrl-secret DHHC-1:03:MmU4OGI1ZTM5OTBkYzc1ODg4NmIxODY3MGQ5MWM3OTQwOTU4ZTM4NGU4Y2YzMDFiZDFlZmE0MzE0NmU3OTQ3NqAJprQ=: 00:11:40.250 09:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:40.250 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:40.250 09:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c 00:11:40.250 09:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.250 09:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:40.250 09:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.250 09:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:40.250 09:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:40.250 09:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:40.250 09:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:11:40.250 09:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:40.250 09:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:40.250 09:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:40.250 09:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:40.250 09:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:40.250 09:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:40.250 09:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.250 09:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:40.250 09:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.250 09:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:40.250 09:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:40.250 09:39:27 
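The stretch above is one complete connect_authenticate round for key0, and the rounds that follow repeat the same shape for the other keys, digests and DH groups: select the DH-CHAP digests and groups on the host app, allow the host NQN on the subsystem with a host key (plus a controller key when bidirectional authentication is being exercised), attach a controller from the host app, confirm on the target that the qpair authenticated, detach, then redo the handshake from the kernel initiator with the literal DHHC-1 secrets. Stripped of the rpc_cmd/hostrpc/bdev_connect wrappers, one round reduces to roughly the following, with command spellings taken from the trace, $hostnqn standing for the nqn.2014-08.org.nvmexpress:uuid:9203ba0c-... host NQN (whose uuid part also serves as --hostid) and the secrets abbreviated:

rpc()     { /home/vagrant/spdk_repo/spdk/scripts/rpc.py "$@"; }                        # target app
hostrpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock "$@"; }  # host app
hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
rpc nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" --dhchap-key key0 --dhchap-ctrlr-key ckey0
hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q "$hostnqn" \
    -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
rpc nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.state'     # expect "completed"
hostrpc bdev_nvme_detach_controller nvme0
nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q "$hostnqn" --hostid "${hostnqn##*:}" -l 0 \
    --dhchap-secret "DHHC-1:00:..." --dhchap-ctrl-secret "DHHC-1:03:..."
nvme disconnect -n nqn.2024-03.io.spdk:cnode0
rpc nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"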
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:40.250 00:11:40.250 09:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:40.250 09:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:40.250 09:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:40.508 09:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:40.508 09:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:40.508 09:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.508 09:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:40.508 09:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.508 09:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:40.508 { 00:11:40.509 "cntlid": 3, 00:11:40.509 "qid": 0, 00:11:40.509 "state": "enabled", 00:11:40.509 "thread": "nvmf_tgt_poll_group_000", 00:11:40.509 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c", 00:11:40.509 "listen_address": { 00:11:40.509 "trtype": "TCP", 00:11:40.509 "adrfam": "IPv4", 00:11:40.509 "traddr": "10.0.0.3", 00:11:40.509 "trsvcid": "4420" 00:11:40.509 }, 00:11:40.509 "peer_address": { 00:11:40.509 "trtype": "TCP", 00:11:40.509 "adrfam": "IPv4", 00:11:40.509 "traddr": "10.0.0.1", 00:11:40.509 "trsvcid": "33956" 00:11:40.509 }, 00:11:40.509 "auth": { 00:11:40.509 "state": "completed", 00:11:40.509 "digest": "sha256", 00:11:40.509 "dhgroup": "null" 00:11:40.509 } 00:11:40.509 } 00:11:40.509 ]' 00:11:40.509 09:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:40.509 09:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:40.509 09:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:40.768 09:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:40.768 09:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:40.768 09:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:40.768 09:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:40.768 09:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:41.026 09:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTM1OTcwNTEwMDc2NGI3MjIzNzk5ZWJkM2I3OGRjMzGbGEP/: --dhchap-ctrl-secret 
DHHC-1:02:Y2Y0ZjA0MTgzZjFiYWY3NjlhZThiMDNhNDFkYjk3NTM5ZjM0YTM2ZTMyOGQzMDU2TyqA9w==: 00:11:41.026 09:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c --hostid 9203ba0c-8506-4f0b-a886-a7f874c4694c -l 0 --dhchap-secret DHHC-1:01:MTM1OTcwNTEwMDc2NGI3MjIzNzk5ZWJkM2I3OGRjMzGbGEP/: --dhchap-ctrl-secret DHHC-1:02:Y2Y0ZjA0MTgzZjFiYWY3NjlhZThiMDNhNDFkYjk3NTM5ZjM0YTM2ZTMyOGQzMDU2TyqA9w==: 00:11:41.593 09:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:41.593 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:41.593 09:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c 00:11:41.593 09:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.593 09:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:41.593 09:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.593 09:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:41.593 09:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:41.593 09:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:42.158 09:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:11:42.158 09:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:42.158 09:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:42.158 09:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:42.158 09:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:42.158 09:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:42.158 09:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:42.158 09:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.158 09:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:42.158 09:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.158 09:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:42.158 09:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c 
-n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:42.158 09:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:42.415 00:11:42.415 09:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:42.415 09:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:42.415 09:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:42.672 09:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:42.672 09:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:42.672 09:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.672 09:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:42.672 09:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.672 09:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:42.672 { 00:11:42.672 "cntlid": 5, 00:11:42.672 "qid": 0, 00:11:42.672 "state": "enabled", 00:11:42.672 "thread": "nvmf_tgt_poll_group_000", 00:11:42.672 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c", 00:11:42.672 "listen_address": { 00:11:42.672 "trtype": "TCP", 00:11:42.672 "adrfam": "IPv4", 00:11:42.672 "traddr": "10.0.0.3", 00:11:42.672 "trsvcid": "4420" 00:11:42.672 }, 00:11:42.672 "peer_address": { 00:11:42.672 "trtype": "TCP", 00:11:42.672 "adrfam": "IPv4", 00:11:42.672 "traddr": "10.0.0.1", 00:11:42.672 "trsvcid": "33984" 00:11:42.672 }, 00:11:42.672 "auth": { 00:11:42.672 "state": "completed", 00:11:42.672 "digest": "sha256", 00:11:42.672 "dhgroup": "null" 00:11:42.672 } 00:11:42.672 } 00:11:42.672 ]' 00:11:42.672 09:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:42.929 09:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:42.929 09:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:42.929 09:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:42.929 09:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:42.929 09:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:42.929 09:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:42.929 09:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:43.187 09:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:NTI1NzhmNDY1OTc4MTIxMDVkMjg0OTQ0YzhkMzdiNzRmYjA5NWQyYmYwOWY0NDE3KVMkCw==: --dhchap-ctrl-secret DHHC-1:01:ZGEyODg5MDM3MzUxN2Q2NzQwZTY5ZTJmYjc4NTk0ZjTW3ane: 00:11:43.187 09:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c --hostid 9203ba0c-8506-4f0b-a886-a7f874c4694c -l 0 --dhchap-secret DHHC-1:02:NTI1NzhmNDY1OTc4MTIxMDVkMjg0OTQ0YzhkMzdiNzRmYjA5NWQyYmYwOWY0NDE3KVMkCw==: --dhchap-ctrl-secret DHHC-1:01:ZGEyODg5MDM3MzUxN2Q2NzQwZTY5ZTJmYjc4NTk0ZjTW3ane: 00:11:43.753 09:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:44.012 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:44.012 09:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c 00:11:44.012 09:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.012 09:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:44.012 09:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.012 09:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:44.012 09:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:44.012 09:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:44.270 09:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:11:44.270 09:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:44.270 09:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:44.270 09:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:44.270 09:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:44.270 09:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:44.270 09:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c --dhchap-key key3 00:11:44.270 09:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.270 09:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:44.270 09:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.270 09:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:44.270 09:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:44.270 09:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:44.528 00:11:44.528 09:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:44.528 09:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:44.528 09:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:44.785 09:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:44.785 09:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:44.785 09:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.785 09:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:44.785 09:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.785 09:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:44.785 { 00:11:44.785 "cntlid": 7, 00:11:44.785 "qid": 0, 00:11:44.785 "state": "enabled", 00:11:44.785 "thread": "nvmf_tgt_poll_group_000", 00:11:44.785 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c", 00:11:44.785 "listen_address": { 00:11:44.785 "trtype": "TCP", 00:11:44.785 "adrfam": "IPv4", 00:11:44.785 "traddr": "10.0.0.3", 00:11:44.785 "trsvcid": "4420" 00:11:44.785 }, 00:11:44.785 "peer_address": { 00:11:44.785 "trtype": "TCP", 00:11:44.785 "adrfam": "IPv4", 00:11:44.785 "traddr": "10.0.0.1", 00:11:44.785 "trsvcid": "34012" 00:11:44.785 }, 00:11:44.785 "auth": { 00:11:44.785 "state": "completed", 00:11:44.785 "digest": "sha256", 00:11:44.785 "dhgroup": "null" 00:11:44.785 } 00:11:44.785 } 00:11:44.785 ]' 00:11:44.785 09:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:44.786 09:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:44.786 09:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:45.043 09:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:45.043 09:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:45.043 09:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:45.043 09:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:45.043 09:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:45.301 09:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:MzhlYzIxNzUwOThmN2UzOTM4ZTBkYTA1MjIwMzZiZmMwNmZiMDFiODk3MzkyZjUxOTFkZDY3OTc1M2U2NjA0NAyL5SM=: 00:11:45.301 09:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c --hostid 9203ba0c-8506-4f0b-a886-a7f874c4694c -l 0 --dhchap-secret DHHC-1:03:MzhlYzIxNzUwOThmN2UzOTM4ZTBkYTA1MjIwMzZiZmMwNmZiMDFiODk3MzkyZjUxOTFkZDY3OTc1M2U2NjA0NAyL5SM=: 00:11:45.885 09:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:45.885 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:45.885 09:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c 00:11:45.885 09:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.885 09:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:45.885 09:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.885 09:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:45.885 09:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:45.885 09:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:45.885 09:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:46.144 09:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:11:46.144 09:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:46.144 09:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:46.144 09:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:46.144 09:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:46.144 09:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:46.144 09:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:46.144 09:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.144 09:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:46.144 09:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.144 09:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:46.144 09:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t 
tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:46.144 09:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:46.707 00:11:46.707 09:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:46.707 09:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:46.707 09:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:46.965 09:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:46.965 09:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:46.965 09:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.965 09:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:46.965 09:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.965 09:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:46.965 { 00:11:46.965 "cntlid": 9, 00:11:46.965 "qid": 0, 00:11:46.965 "state": "enabled", 00:11:46.965 "thread": "nvmf_tgt_poll_group_000", 00:11:46.965 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c", 00:11:46.965 "listen_address": { 00:11:46.965 "trtype": "TCP", 00:11:46.965 "adrfam": "IPv4", 00:11:46.965 "traddr": "10.0.0.3", 00:11:46.965 "trsvcid": "4420" 00:11:46.965 }, 00:11:46.965 "peer_address": { 00:11:46.965 "trtype": "TCP", 00:11:46.965 "adrfam": "IPv4", 00:11:46.965 "traddr": "10.0.0.1", 00:11:46.965 "trsvcid": "60712" 00:11:46.965 }, 00:11:46.965 "auth": { 00:11:46.965 "state": "completed", 00:11:46.965 "digest": "sha256", 00:11:46.965 "dhgroup": "ffdhe2048" 00:11:46.965 } 00:11:46.965 } 00:11:46.965 ]' 00:11:46.965 09:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:46.965 09:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:46.965 09:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:46.965 09:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:46.965 09:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:46.965 09:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:46.965 09:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:46.966 09:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:47.530 
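The cycle that just completed is one pass of the test's connect_authenticate helper: the host-side SPDK app (driven through rpc.py with -s /var/tmp/host.sock) is restricted to a single digest and DH group, the target subsystem nqn.2024-03.io.spdk:cnode0 gets the host NQN added with a key pair, and a bdev_nvme controller is attached so DH-HMAC-CHAP runs during the fabric CONNECT. A minimal sketch of that sequence, using the same RPCs and arguments that appear in the trace; rpc.py is invoked by name here rather than the full /home/vagrant/spdk_repo path, and key0/ckey0 are assumed to be key names registered earlier in this run:

# Host side: restrict the initiator to one hash and one DH group for this pass.
rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048

# Target side: authorize the host NQN on the subsystem and bind it to a key pair;
# key0 authenticates the host, ckey0 lets the host verify the controller.
rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Host side: attach a controller; authentication happens as part of CONNECT.
rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
        -a 10.0.0.3 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c \
        -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0

The two RPC sockets matter: rpc_cmd in the trace talks to the target application, while the hostrpc wrapper points rpc.py at /var/tmp/host.sock, the separate SPDK process acting as initiator.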
09:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTFkMDQyZTZhNmViZjJlMmYwZDgwZmRjZTgzM2RlZWI0MTUzZmI4OTFkYzllYTg2xOE6TQ==: --dhchap-ctrl-secret DHHC-1:03:MmU4OGI1ZTM5OTBkYzc1ODg4NmIxODY3MGQ5MWM3OTQwOTU4ZTM4NGU4Y2YzMDFiZDFlZmE0MzE0NmU3OTQ3NqAJprQ=: 00:11:47.530 09:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c --hostid 9203ba0c-8506-4f0b-a886-a7f874c4694c -l 0 --dhchap-secret DHHC-1:00:ZTFkMDQyZTZhNmViZjJlMmYwZDgwZmRjZTgzM2RlZWI0MTUzZmI4OTFkYzllYTg2xOE6TQ==: --dhchap-ctrl-secret DHHC-1:03:MmU4OGI1ZTM5OTBkYzc1ODg4NmIxODY3MGQ5MWM3OTQwOTU4ZTM4NGU4Y2YzMDFiZDFlZmE0MzE0NmU3OTQ3NqAJprQ=: 00:11:48.095 09:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:48.095 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:48.095 09:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c 00:11:48.095 09:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.095 09:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:48.095 09:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.095 09:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:48.095 09:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:48.095 09:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:48.353 09:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:11:48.353 09:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:48.353 09:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:48.353 09:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:48.353 09:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:48.353 09:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:48.353 09:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:48.353 09:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.353 09:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:48.353 09:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.353 09:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:48.353 09:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:48.353 09:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:48.918 00:11:48.918 09:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:48.918 09:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:48.918 09:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:49.238 09:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:49.238 09:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:49.238 09:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.238 09:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:49.238 09:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.238 09:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:49.238 { 00:11:49.238 "cntlid": 11, 00:11:49.238 "qid": 0, 00:11:49.238 "state": "enabled", 00:11:49.238 "thread": "nvmf_tgt_poll_group_000", 00:11:49.238 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c", 00:11:49.238 "listen_address": { 00:11:49.238 "trtype": "TCP", 00:11:49.238 "adrfam": "IPv4", 00:11:49.238 "traddr": "10.0.0.3", 00:11:49.238 "trsvcid": "4420" 00:11:49.238 }, 00:11:49.238 "peer_address": { 00:11:49.238 "trtype": "TCP", 00:11:49.238 "adrfam": "IPv4", 00:11:49.239 "traddr": "10.0.0.1", 00:11:49.239 "trsvcid": "60746" 00:11:49.239 }, 00:11:49.239 "auth": { 00:11:49.239 "state": "completed", 00:11:49.239 "digest": "sha256", 00:11:49.239 "dhgroup": "ffdhe2048" 00:11:49.239 } 00:11:49.239 } 00:11:49.239 ]' 00:11:49.239 09:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:49.239 09:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:49.239 09:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:49.239 09:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:49.239 09:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:49.239 09:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:49.239 09:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:49.239 
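Each attach is followed by the same verification seen above: the host app's controller list is checked for the expected name, then the target is asked for the subsystem's queue pairs so the auth block of the connected qpair can be compared against what was configured. A sketch of those checks with the jq filters used in the trace; the shell variables are illustrative only, since target/auth.sh wraps the calls in its rpc_cmd/hostrpc helpers:

# The host-side controller must exist under the expected name.
name=$(rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name')
[[ "$name" == "nvme0" ]]

# The target reports the qpair's negotiated authentication parameters.
qpairs=$(rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ "$(jq -r '.[0].auth.digest'  <<< "$qpairs")" == "sha256"    ]]
[[ "$(jq -r '.[0].auth.dhgroup' <<< "$qpairs")" == "ffdhe2048" ]]
[[ "$(jq -r '.[0].auth.state'   <<< "$qpairs")" == "completed" ]]

The state check is the one that actually proves something: "completed" is only reported once the DH-HMAC-CHAP exchange finished successfully on that qpair.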
09:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:49.498 09:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTM1OTcwNTEwMDc2NGI3MjIzNzk5ZWJkM2I3OGRjMzGbGEP/: --dhchap-ctrl-secret DHHC-1:02:Y2Y0ZjA0MTgzZjFiYWY3NjlhZThiMDNhNDFkYjk3NTM5ZjM0YTM2ZTMyOGQzMDU2TyqA9w==: 00:11:49.498 09:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c --hostid 9203ba0c-8506-4f0b-a886-a7f874c4694c -l 0 --dhchap-secret DHHC-1:01:MTM1OTcwNTEwMDc2NGI3MjIzNzk5ZWJkM2I3OGRjMzGbGEP/: --dhchap-ctrl-secret DHHC-1:02:Y2Y0ZjA0MTgzZjFiYWY3NjlhZThiMDNhNDFkYjk3NTM5ZjM0YTM2ZTMyOGQzMDU2TyqA9w==: 00:11:50.433 09:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:50.433 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:50.433 09:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c 00:11:50.433 09:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.433 09:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:50.433 09:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.433 09:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:50.433 09:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:50.433 09:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:50.691 09:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:11:50.691 09:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:50.691 09:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:50.691 09:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:50.691 09:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:50.691 09:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:50.691 09:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:50.691 09:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.691 09:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:50.691 09:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:11:50.691 09:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:50.691 09:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:50.691 09:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:50.948 00:11:50.948 09:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:50.948 09:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:50.948 09:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:51.515 09:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:51.515 09:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:51.515 09:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.515 09:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:51.515 09:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.515 09:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:51.515 { 00:11:51.515 "cntlid": 13, 00:11:51.515 "qid": 0, 00:11:51.515 "state": "enabled", 00:11:51.515 "thread": "nvmf_tgt_poll_group_000", 00:11:51.515 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c", 00:11:51.515 "listen_address": { 00:11:51.515 "trtype": "TCP", 00:11:51.515 "adrfam": "IPv4", 00:11:51.515 "traddr": "10.0.0.3", 00:11:51.515 "trsvcid": "4420" 00:11:51.515 }, 00:11:51.515 "peer_address": { 00:11:51.515 "trtype": "TCP", 00:11:51.515 "adrfam": "IPv4", 00:11:51.515 "traddr": "10.0.0.1", 00:11:51.515 "trsvcid": "60766" 00:11:51.515 }, 00:11:51.515 "auth": { 00:11:51.515 "state": "completed", 00:11:51.515 "digest": "sha256", 00:11:51.515 "dhgroup": "ffdhe2048" 00:11:51.515 } 00:11:51.515 } 00:11:51.515 ]' 00:11:51.515 09:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:51.515 09:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:51.515 09:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:51.515 09:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:51.515 09:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:51.515 09:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:51.515 09:39:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:51.515 09:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:52.080 09:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTI1NzhmNDY1OTc4MTIxMDVkMjg0OTQ0YzhkMzdiNzRmYjA5NWQyYmYwOWY0NDE3KVMkCw==: --dhchap-ctrl-secret DHHC-1:01:ZGEyODg5MDM3MzUxN2Q2NzQwZTY5ZTJmYjc4NTk0ZjTW3ane: 00:11:52.080 09:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c --hostid 9203ba0c-8506-4f0b-a886-a7f874c4694c -l 0 --dhchap-secret DHHC-1:02:NTI1NzhmNDY1OTc4MTIxMDVkMjg0OTQ0YzhkMzdiNzRmYjA5NWQyYmYwOWY0NDE3KVMkCw==: --dhchap-ctrl-secret DHHC-1:01:ZGEyODg5MDM3MzUxN2Q2NzQwZTY5ZTJmYjc4NTk0ZjTW3ane: 00:11:52.650 09:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:52.650 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:52.650 09:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c 00:11:52.650 09:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.650 09:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:52.650 09:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.650 09:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:52.650 09:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:52.650 09:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:52.920 09:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:11:52.920 09:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:52.920 09:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:52.920 09:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:52.920 09:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:52.920 09:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:52.920 09:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c --dhchap-key key3 00:11:52.920 09:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.920 09:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
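Key index 3 in this run has no controller (bidirectional) key, which is why the nvmf_subsystem_add_host call above carries only --dhchap-key key3. The script handles that with the array expansion visible at target/auth.sh@68: the controller-key arguments are produced only when a ckey exists for the index. Roughly, with $keyid standing in for the function's positional parameter and keys/ckeys assumed to be the arrays the test filled in earlier:

# --dhchap-ctrlr-key "ckey$keyid" is emitted only if ckeys[keyid] is non-empty;
# for keyid=3 the expansion collapses to an empty array, so authentication is
# one-way: the target verifies the host, the host does not verify the controller.
ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})

rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c \
        --dhchap-key "key$keyid" "${ckey[@]}"

The same asymmetry shows up on the initiator side below: the key3 passes of nvme connect carry only --dhchap-secret, with no --dhchap-ctrl-secret.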
00:11:52.920 09:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.920 09:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:52.920 09:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:52.920 09:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:53.486 00:11:53.486 09:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:53.486 09:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:53.486 09:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:53.744 09:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:53.744 09:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:53.744 09:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.744 09:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:53.744 09:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.744 09:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:53.744 { 00:11:53.744 "cntlid": 15, 00:11:53.744 "qid": 0, 00:11:53.744 "state": "enabled", 00:11:53.744 "thread": "nvmf_tgt_poll_group_000", 00:11:53.744 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c", 00:11:53.744 "listen_address": { 00:11:53.744 "trtype": "TCP", 00:11:53.744 "adrfam": "IPv4", 00:11:53.744 "traddr": "10.0.0.3", 00:11:53.744 "trsvcid": "4420" 00:11:53.744 }, 00:11:53.744 "peer_address": { 00:11:53.744 "trtype": "TCP", 00:11:53.744 "adrfam": "IPv4", 00:11:53.744 "traddr": "10.0.0.1", 00:11:53.744 "trsvcid": "60788" 00:11:53.744 }, 00:11:53.744 "auth": { 00:11:53.744 "state": "completed", 00:11:53.744 "digest": "sha256", 00:11:53.744 "dhgroup": "ffdhe2048" 00:11:53.744 } 00:11:53.744 } 00:11:53.744 ]' 00:11:53.744 09:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:53.744 09:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:53.744 09:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:53.744 09:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:53.744 09:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:53.744 09:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:53.744 
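Besides the SPDK host app, every pass is also exercised with the kernel initiator: nvme connect is handed the DHHC-1 secret strings matching the key pair configured on the target, the controller is disconnected again, and the host entry is removed so the next key/dhgroup combination starts clean. A sketch of that round trip with the flags used in this log (-i 1 and -l 0 are passed through from the test, i.e. one I/O queue and a zero controller-loss timeout in nvme-cli terms); the secret values here are placeholders, not the ones printed above:

hostnqn=nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c

# Connect with in-band DH-HMAC-CHAP; drop --dhchap-ctrl-secret for one-way keys.
nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -l 0 \
        -q "$hostnqn" --hostid 9203ba0c-8506-4f0b-a886-a7f874c4694c \
        --dhchap-secret "DHHC-1:00:<host key>" \
        --dhchap-ctrl-secret "DHHC-1:03:<controller key>"

nvme disconnect -n nqn.2024-03.io.spdk:cnode0

# Reset target-side state before the next key/dhgroup combination.
rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"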
09:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:53.744 09:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:54.079 09:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MzhlYzIxNzUwOThmN2UzOTM4ZTBkYTA1MjIwMzZiZmMwNmZiMDFiODk3MzkyZjUxOTFkZDY3OTc1M2U2NjA0NAyL5SM=: 00:11:54.079 09:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c --hostid 9203ba0c-8506-4f0b-a886-a7f874c4694c -l 0 --dhchap-secret DHHC-1:03:MzhlYzIxNzUwOThmN2UzOTM4ZTBkYTA1MjIwMzZiZmMwNmZiMDFiODk3MzkyZjUxOTFkZDY3OTc1M2U2NjA0NAyL5SM=: 00:11:54.645 09:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:54.904 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:54.904 09:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c 00:11:54.904 09:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.904 09:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:54.904 09:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.904 09:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:54.904 09:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:54.904 09:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:54.904 09:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:55.162 09:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:11:55.162 09:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:55.162 09:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:55.162 09:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:55.162 09:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:55.162 09:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:55.162 09:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:55.162 09:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.162 09:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:11:55.162 09:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.162 09:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:55.162 09:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:55.162 09:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:55.728 00:11:55.728 09:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:55.728 09:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:55.728 09:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:55.728 09:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:55.728 09:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:55.728 09:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.728 09:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:55.728 09:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.986 09:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:55.986 { 00:11:55.986 "cntlid": 17, 00:11:55.986 "qid": 0, 00:11:55.986 "state": "enabled", 00:11:55.986 "thread": "nvmf_tgt_poll_group_000", 00:11:55.986 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c", 00:11:55.986 "listen_address": { 00:11:55.986 "trtype": "TCP", 00:11:55.986 "adrfam": "IPv4", 00:11:55.986 "traddr": "10.0.0.3", 00:11:55.986 "trsvcid": "4420" 00:11:55.986 }, 00:11:55.986 "peer_address": { 00:11:55.986 "trtype": "TCP", 00:11:55.986 "adrfam": "IPv4", 00:11:55.986 "traddr": "10.0.0.1", 00:11:55.986 "trsvcid": "60816" 00:11:55.986 }, 00:11:55.986 "auth": { 00:11:55.986 "state": "completed", 00:11:55.986 "digest": "sha256", 00:11:55.986 "dhgroup": "ffdhe3072" 00:11:55.986 } 00:11:55.986 } 00:11:55.986 ]' 00:11:55.987 09:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:55.987 09:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:55.987 09:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:55.987 09:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:55.987 09:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:55.987 09:39:43 
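The pass that began just above is the second iteration of the outer loop: the same four key indices are replayed for ffdhe3072, and later for ffdhe4096, which is why the remainder of this stretch repeats the pattern already shown with only the --dhchap-dhgroups argument and the reported "dhgroup" changing. The driving structure, reconstructed from the target/auth.sh@119-123 markers in the trace (hostrpc and connect_authenticate are the script's own helpers; sha256 is written literally because it is the only digest exercised in this part of the log, and any enclosing digest loop is not visible here):

for dhgroup in "${dhgroups[@]}"; do        # ffdhe2048, ffdhe3072, ffdhe4096 appear here
        for keyid in "${!keys[@]}"; do     # key indices 0..3 in this run
                hostrpc bdev_nvme_set_options --dhchap-digests sha256 \
                        --dhchap-dhgroups "$dhgroup"
                connect_authenticate sha256 "$dhgroup" "$keyid"
        done
done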
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:55.987 09:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:55.987 09:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:56.245 09:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTFkMDQyZTZhNmViZjJlMmYwZDgwZmRjZTgzM2RlZWI0MTUzZmI4OTFkYzllYTg2xOE6TQ==: --dhchap-ctrl-secret DHHC-1:03:MmU4OGI1ZTM5OTBkYzc1ODg4NmIxODY3MGQ5MWM3OTQwOTU4ZTM4NGU4Y2YzMDFiZDFlZmE0MzE0NmU3OTQ3NqAJprQ=: 00:11:56.245 09:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c --hostid 9203ba0c-8506-4f0b-a886-a7f874c4694c -l 0 --dhchap-secret DHHC-1:00:ZTFkMDQyZTZhNmViZjJlMmYwZDgwZmRjZTgzM2RlZWI0MTUzZmI4OTFkYzllYTg2xOE6TQ==: --dhchap-ctrl-secret DHHC-1:03:MmU4OGI1ZTM5OTBkYzc1ODg4NmIxODY3MGQ5MWM3OTQwOTU4ZTM4NGU4Y2YzMDFiZDFlZmE0MzE0NmU3OTQ3NqAJprQ=: 00:11:57.180 09:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:57.180 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:57.180 09:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c 00:11:57.180 09:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.180 09:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:57.180 09:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.180 09:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:57.180 09:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:57.180 09:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:57.438 09:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:11:57.438 09:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:57.438 09:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:57.438 09:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:57.438 09:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:57.438 09:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:57.438 09:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c --dhchap-key key1 --dhchap-ctrlr-key 
ckey1 00:11:57.438 09:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.438 09:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:57.438 09:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.438 09:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:57.438 09:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:57.438 09:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:57.696 00:11:57.696 09:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:57.696 09:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:57.696 09:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:57.955 09:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:57.955 09:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:57.955 09:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.955 09:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:57.955 09:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.955 09:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:57.955 { 00:11:57.955 "cntlid": 19, 00:11:57.955 "qid": 0, 00:11:57.955 "state": "enabled", 00:11:57.955 "thread": "nvmf_tgt_poll_group_000", 00:11:57.955 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c", 00:11:57.955 "listen_address": { 00:11:57.955 "trtype": "TCP", 00:11:57.955 "adrfam": "IPv4", 00:11:57.955 "traddr": "10.0.0.3", 00:11:57.955 "trsvcid": "4420" 00:11:57.955 }, 00:11:57.955 "peer_address": { 00:11:57.955 "trtype": "TCP", 00:11:57.955 "adrfam": "IPv4", 00:11:57.955 "traddr": "10.0.0.1", 00:11:57.955 "trsvcid": "32862" 00:11:57.955 }, 00:11:57.955 "auth": { 00:11:57.955 "state": "completed", 00:11:57.955 "digest": "sha256", 00:11:57.955 "dhgroup": "ffdhe3072" 00:11:57.955 } 00:11:57.955 } 00:11:57.955 ]' 00:11:57.955 09:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:58.214 09:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:58.214 09:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:58.214 09:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:58.214 09:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:58.214 09:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:58.214 09:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:58.214 09:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:58.472 09:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTM1OTcwNTEwMDc2NGI3MjIzNzk5ZWJkM2I3OGRjMzGbGEP/: --dhchap-ctrl-secret DHHC-1:02:Y2Y0ZjA0MTgzZjFiYWY3NjlhZThiMDNhNDFkYjk3NTM5ZjM0YTM2ZTMyOGQzMDU2TyqA9w==: 00:11:58.472 09:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c --hostid 9203ba0c-8506-4f0b-a886-a7f874c4694c -l 0 --dhchap-secret DHHC-1:01:MTM1OTcwNTEwMDc2NGI3MjIzNzk5ZWJkM2I3OGRjMzGbGEP/: --dhchap-ctrl-secret DHHC-1:02:Y2Y0ZjA0MTgzZjFiYWY3NjlhZThiMDNhNDFkYjk3NTM5ZjM0YTM2ZTMyOGQzMDU2TyqA9w==: 00:11:59.409 09:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:59.409 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:59.409 09:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c 00:11:59.409 09:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.409 09:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:59.409 09:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.409 09:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:59.409 09:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:59.409 09:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:59.667 09:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:11:59.667 09:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:59.667 09:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:59.667 09:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:59.667 09:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:59.667 09:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:59.667 09:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:59.667 09:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.667 09:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:59.667 09:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.667 09:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:59.667 09:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:59.667 09:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:00.252 00:12:00.252 09:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:00.252 09:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:00.252 09:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:00.526 09:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:00.526 09:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:00.526 09:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.526 09:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:00.526 09:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.526 09:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:00.526 { 00:12:00.526 "cntlid": 21, 00:12:00.526 "qid": 0, 00:12:00.526 "state": "enabled", 00:12:00.526 "thread": "nvmf_tgt_poll_group_000", 00:12:00.526 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c", 00:12:00.526 "listen_address": { 00:12:00.526 "trtype": "TCP", 00:12:00.526 "adrfam": "IPv4", 00:12:00.526 "traddr": "10.0.0.3", 00:12:00.526 "trsvcid": "4420" 00:12:00.526 }, 00:12:00.526 "peer_address": { 00:12:00.526 "trtype": "TCP", 00:12:00.526 "adrfam": "IPv4", 00:12:00.526 "traddr": "10.0.0.1", 00:12:00.526 "trsvcid": "32884" 00:12:00.526 }, 00:12:00.526 "auth": { 00:12:00.526 "state": "completed", 00:12:00.526 "digest": "sha256", 00:12:00.526 "dhgroup": "ffdhe3072" 00:12:00.526 } 00:12:00.526 } 00:12:00.526 ]' 00:12:00.526 09:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:00.526 09:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:00.526 09:39:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:00.526 09:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:00.526 09:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:00.526 09:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:00.526 09:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:00.526 09:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:00.785 09:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTI1NzhmNDY1OTc4MTIxMDVkMjg0OTQ0YzhkMzdiNzRmYjA5NWQyYmYwOWY0NDE3KVMkCw==: --dhchap-ctrl-secret DHHC-1:01:ZGEyODg5MDM3MzUxN2Q2NzQwZTY5ZTJmYjc4NTk0ZjTW3ane: 00:12:00.785 09:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c --hostid 9203ba0c-8506-4f0b-a886-a7f874c4694c -l 0 --dhchap-secret DHHC-1:02:NTI1NzhmNDY1OTc4MTIxMDVkMjg0OTQ0YzhkMzdiNzRmYjA5NWQyYmYwOWY0NDE3KVMkCw==: --dhchap-ctrl-secret DHHC-1:01:ZGEyODg5MDM3MzUxN2Q2NzQwZTY5ZTJmYjc4NTk0ZjTW3ane: 00:12:01.720 09:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:01.720 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:01.720 09:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c 00:12:01.720 09:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.720 09:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:01.720 09:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.720 09:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:01.720 09:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:01.720 09:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:01.720 09:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:12:01.720 09:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:01.720 09:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:01.720 09:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:01.720 09:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:01.720 09:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:01.720 09:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c --dhchap-key key3 00:12:01.720 09:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.720 09:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:01.720 09:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.720 09:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:01.721 09:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:01.721 09:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:02.288 00:12:02.288 09:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:02.288 09:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:02.288 09:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:02.547 09:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:02.547 09:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:02.547 09:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.547 09:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:02.547 09:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.547 09:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:02.547 { 00:12:02.547 "cntlid": 23, 00:12:02.547 "qid": 0, 00:12:02.547 "state": "enabled", 00:12:02.547 "thread": "nvmf_tgt_poll_group_000", 00:12:02.547 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c", 00:12:02.547 "listen_address": { 00:12:02.547 "trtype": "TCP", 00:12:02.547 "adrfam": "IPv4", 00:12:02.547 "traddr": "10.0.0.3", 00:12:02.547 "trsvcid": "4420" 00:12:02.547 }, 00:12:02.547 "peer_address": { 00:12:02.547 "trtype": "TCP", 00:12:02.547 "adrfam": "IPv4", 00:12:02.547 "traddr": "10.0.0.1", 00:12:02.547 "trsvcid": "32916" 00:12:02.547 }, 00:12:02.547 "auth": { 00:12:02.547 "state": "completed", 00:12:02.547 "digest": "sha256", 00:12:02.547 "dhgroup": "ffdhe3072" 00:12:02.547 } 00:12:02.547 } 00:12:02.548 ]' 00:12:02.548 09:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:02.548 09:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:12:02.548 09:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:02.548 09:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:02.548 09:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:02.548 09:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:02.548 09:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:02.548 09:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:03.114 09:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MzhlYzIxNzUwOThmN2UzOTM4ZTBkYTA1MjIwMzZiZmMwNmZiMDFiODk3MzkyZjUxOTFkZDY3OTc1M2U2NjA0NAyL5SM=: 00:12:03.114 09:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c --hostid 9203ba0c-8506-4f0b-a886-a7f874c4694c -l 0 --dhchap-secret DHHC-1:03:MzhlYzIxNzUwOThmN2UzOTM4ZTBkYTA1MjIwMzZiZmMwNmZiMDFiODk3MzkyZjUxOTFkZDY3OTc1M2U2NjA0NAyL5SM=: 00:12:03.681 09:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:03.681 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:03.681 09:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c 00:12:03.681 09:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.681 09:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:03.681 09:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.681 09:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:03.681 09:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:03.681 09:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:03.681 09:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:03.939 09:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:12:03.939 09:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:03.939 09:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:03.939 09:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:03.939 09:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:03.939 09:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:03.940 09:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:03.940 09:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.940 09:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:03.940 09:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.940 09:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:03.940 09:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:03.940 09:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:04.506 00:12:04.506 09:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:04.506 09:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:04.506 09:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:04.765 09:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:04.765 09:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:04.765 09:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.765 09:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:04.765 09:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.765 09:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:04.765 { 00:12:04.765 "cntlid": 25, 00:12:04.765 "qid": 0, 00:12:04.765 "state": "enabled", 00:12:04.765 "thread": "nvmf_tgt_poll_group_000", 00:12:04.765 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c", 00:12:04.765 "listen_address": { 00:12:04.765 "trtype": "TCP", 00:12:04.765 "adrfam": "IPv4", 00:12:04.765 "traddr": "10.0.0.3", 00:12:04.765 "trsvcid": "4420" 00:12:04.765 }, 00:12:04.765 "peer_address": { 00:12:04.765 "trtype": "TCP", 00:12:04.765 "adrfam": "IPv4", 00:12:04.765 "traddr": "10.0.0.1", 00:12:04.765 "trsvcid": "32944" 00:12:04.765 }, 00:12:04.765 "auth": { 00:12:04.765 "state": "completed", 00:12:04.765 "digest": "sha256", 00:12:04.765 "dhgroup": "ffdhe4096" 00:12:04.765 } 00:12:04.765 } 00:12:04.765 ]' 00:12:04.765 09:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:12:04.765 09:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:04.765 09:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:04.765 09:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:04.765 09:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:04.765 09:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:04.765 09:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:04.765 09:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:05.331 09:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTFkMDQyZTZhNmViZjJlMmYwZDgwZmRjZTgzM2RlZWI0MTUzZmI4OTFkYzllYTg2xOE6TQ==: --dhchap-ctrl-secret DHHC-1:03:MmU4OGI1ZTM5OTBkYzc1ODg4NmIxODY3MGQ5MWM3OTQwOTU4ZTM4NGU4Y2YzMDFiZDFlZmE0MzE0NmU3OTQ3NqAJprQ=: 00:12:05.331 09:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c --hostid 9203ba0c-8506-4f0b-a886-a7f874c4694c -l 0 --dhchap-secret DHHC-1:00:ZTFkMDQyZTZhNmViZjJlMmYwZDgwZmRjZTgzM2RlZWI0MTUzZmI4OTFkYzllYTg2xOE6TQ==: --dhchap-ctrl-secret DHHC-1:03:MmU4OGI1ZTM5OTBkYzc1ODg4NmIxODY3MGQ5MWM3OTQwOTU4ZTM4NGU4Y2YzMDFiZDFlZmE0MzE0NmU3OTQ3NqAJprQ=: 00:12:05.898 09:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:05.898 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:05.898 09:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c 00:12:05.898 09:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.898 09:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:05.898 09:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.898 09:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:05.898 09:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:05.898 09:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:06.157 09:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:12:06.157 09:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:06.157 09:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:06.157 09:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:06.157 09:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:06.157 09:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:06.157 09:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:06.157 09:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.157 09:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:06.157 09:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.157 09:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:06.157 09:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:06.157 09:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:06.723 00:12:06.723 09:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:06.723 09:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:06.723 09:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:06.981 09:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:06.981 09:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:06.981 09:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.981 09:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:06.981 09:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.981 09:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:06.981 { 00:12:06.981 "cntlid": 27, 00:12:06.981 "qid": 0, 00:12:06.981 "state": "enabled", 00:12:06.981 "thread": "nvmf_tgt_poll_group_000", 00:12:06.981 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c", 00:12:06.981 "listen_address": { 00:12:06.981 "trtype": "TCP", 00:12:06.981 "adrfam": "IPv4", 00:12:06.981 "traddr": "10.0.0.3", 00:12:06.981 "trsvcid": "4420" 00:12:06.981 }, 00:12:06.981 "peer_address": { 00:12:06.981 "trtype": "TCP", 00:12:06.981 "adrfam": "IPv4", 00:12:06.981 "traddr": "10.0.0.1", 00:12:06.981 "trsvcid": "58772" 00:12:06.981 }, 00:12:06.981 "auth": { 00:12:06.981 "state": "completed", 
00:12:06.981 "digest": "sha256", 00:12:06.981 "dhgroup": "ffdhe4096" 00:12:06.981 } 00:12:06.981 } 00:12:06.981 ]' 00:12:06.981 09:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:06.981 09:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:06.981 09:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:06.981 09:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:06.981 09:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:07.239 09:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:07.239 09:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:07.239 09:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:07.497 09:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTM1OTcwNTEwMDc2NGI3MjIzNzk5ZWJkM2I3OGRjMzGbGEP/: --dhchap-ctrl-secret DHHC-1:02:Y2Y0ZjA0MTgzZjFiYWY3NjlhZThiMDNhNDFkYjk3NTM5ZjM0YTM2ZTMyOGQzMDU2TyqA9w==: 00:12:07.497 09:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c --hostid 9203ba0c-8506-4f0b-a886-a7f874c4694c -l 0 --dhchap-secret DHHC-1:01:MTM1OTcwNTEwMDc2NGI3MjIzNzk5ZWJkM2I3OGRjMzGbGEP/: --dhchap-ctrl-secret DHHC-1:02:Y2Y0ZjA0MTgzZjFiYWY3NjlhZThiMDNhNDFkYjk3NTM5ZjM0YTM2ZTMyOGQzMDU2TyqA9w==: 00:12:08.062 09:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:08.062 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:08.062 09:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c 00:12:08.062 09:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.062 09:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:08.062 09:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.062 09:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:08.062 09:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:08.062 09:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:08.320 09:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:12:08.320 09:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:08.320 09:39:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:08.320 09:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:08.320 09:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:08.320 09:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:08.320 09:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:08.320 09:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.320 09:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:08.320 09:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.320 09:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:08.320 09:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:08.320 09:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:08.886 00:12:08.886 09:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:08.886 09:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:08.886 09:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:09.144 09:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:09.144 09:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:09.144 09:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.144 09:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:09.144 09:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.144 09:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:09.144 { 00:12:09.144 "cntlid": 29, 00:12:09.144 "qid": 0, 00:12:09.144 "state": "enabled", 00:12:09.144 "thread": "nvmf_tgt_poll_group_000", 00:12:09.144 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c", 00:12:09.144 "listen_address": { 00:12:09.144 "trtype": "TCP", 00:12:09.144 "adrfam": "IPv4", 00:12:09.144 "traddr": "10.0.0.3", 00:12:09.144 "trsvcid": "4420" 00:12:09.144 }, 00:12:09.144 "peer_address": { 00:12:09.144 "trtype": "TCP", 00:12:09.145 "adrfam": 
"IPv4", 00:12:09.145 "traddr": "10.0.0.1", 00:12:09.145 "trsvcid": "58780" 00:12:09.145 }, 00:12:09.145 "auth": { 00:12:09.145 "state": "completed", 00:12:09.145 "digest": "sha256", 00:12:09.145 "dhgroup": "ffdhe4096" 00:12:09.145 } 00:12:09.145 } 00:12:09.145 ]' 00:12:09.145 09:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:09.145 09:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:09.145 09:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:09.145 09:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:09.145 09:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:09.145 09:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:09.145 09:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:09.145 09:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:09.709 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTI1NzhmNDY1OTc4MTIxMDVkMjg0OTQ0YzhkMzdiNzRmYjA5NWQyYmYwOWY0NDE3KVMkCw==: --dhchap-ctrl-secret DHHC-1:01:ZGEyODg5MDM3MzUxN2Q2NzQwZTY5ZTJmYjc4NTk0ZjTW3ane: 00:12:09.709 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c --hostid 9203ba0c-8506-4f0b-a886-a7f874c4694c -l 0 --dhchap-secret DHHC-1:02:NTI1NzhmNDY1OTc4MTIxMDVkMjg0OTQ0YzhkMzdiNzRmYjA5NWQyYmYwOWY0NDE3KVMkCw==: --dhchap-ctrl-secret DHHC-1:01:ZGEyODg5MDM3MzUxN2Q2NzQwZTY5ZTJmYjc4NTk0ZjTW3ane: 00:12:10.275 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:10.275 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:10.275 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c 00:12:10.275 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.275 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:10.275 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.275 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:10.275 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:10.275 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:10.532 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:12:10.532 09:39:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:10.532 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:10.532 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:10.532 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:10.532 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:10.532 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c --dhchap-key key3 00:12:10.532 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.532 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:10.532 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.532 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:10.532 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:10.532 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:10.790 00:12:10.790 09:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:10.790 09:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:10.790 09:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:11.048 09:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:11.048 09:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:11.048 09:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.048 09:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:11.307 09:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.307 09:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:11.307 { 00:12:11.307 "cntlid": 31, 00:12:11.307 "qid": 0, 00:12:11.307 "state": "enabled", 00:12:11.307 "thread": "nvmf_tgt_poll_group_000", 00:12:11.307 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c", 00:12:11.307 "listen_address": { 00:12:11.307 "trtype": "TCP", 00:12:11.307 "adrfam": "IPv4", 00:12:11.307 "traddr": "10.0.0.3", 00:12:11.307 "trsvcid": "4420" 00:12:11.307 }, 00:12:11.307 "peer_address": { 00:12:11.307 "trtype": "TCP", 
00:12:11.307 "adrfam": "IPv4", 00:12:11.307 "traddr": "10.0.0.1", 00:12:11.307 "trsvcid": "58798" 00:12:11.307 }, 00:12:11.307 "auth": { 00:12:11.307 "state": "completed", 00:12:11.307 "digest": "sha256", 00:12:11.307 "dhgroup": "ffdhe4096" 00:12:11.307 } 00:12:11.307 } 00:12:11.307 ]' 00:12:11.307 09:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:11.307 09:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:11.307 09:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:11.307 09:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:11.307 09:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:11.307 09:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:11.307 09:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:11.307 09:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:11.565 09:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MzhlYzIxNzUwOThmN2UzOTM4ZTBkYTA1MjIwMzZiZmMwNmZiMDFiODk3MzkyZjUxOTFkZDY3OTc1M2U2NjA0NAyL5SM=: 00:12:11.565 09:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c --hostid 9203ba0c-8506-4f0b-a886-a7f874c4694c -l 0 --dhchap-secret DHHC-1:03:MzhlYzIxNzUwOThmN2UzOTM4ZTBkYTA1MjIwMzZiZmMwNmZiMDFiODk3MzkyZjUxOTFkZDY3OTc1M2U2NjA0NAyL5SM=: 00:12:12.501 09:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:12.501 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:12.501 09:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c 00:12:12.501 09:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.501 09:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:12.501 09:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.501 09:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:12.501 09:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:12.501 09:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:12:12.501 09:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:12:12.501 09:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:12:12.501 
09:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:12.501 09:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:12.501 09:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:12:12.501 09:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:12.501 09:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:12.501 09:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:12.501 09:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.501 09:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:12.759 09:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.759 09:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:12.759 09:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:12.759 09:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:13.017 00:12:13.017 09:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:13.017 09:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:13.017 09:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:13.276 09:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:13.276 09:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:13.276 09:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.276 09:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:13.276 09:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.276 09:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:13.276 { 00:12:13.276 "cntlid": 33, 00:12:13.276 "qid": 0, 00:12:13.276 "state": "enabled", 00:12:13.276 "thread": "nvmf_tgt_poll_group_000", 00:12:13.276 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c", 00:12:13.276 "listen_address": { 00:12:13.276 "trtype": "TCP", 00:12:13.276 "adrfam": "IPv4", 00:12:13.276 "traddr": 
"10.0.0.3", 00:12:13.276 "trsvcid": "4420" 00:12:13.276 }, 00:12:13.276 "peer_address": { 00:12:13.276 "trtype": "TCP", 00:12:13.276 "adrfam": "IPv4", 00:12:13.276 "traddr": "10.0.0.1", 00:12:13.276 "trsvcid": "58824" 00:12:13.276 }, 00:12:13.276 "auth": { 00:12:13.276 "state": "completed", 00:12:13.276 "digest": "sha256", 00:12:13.276 "dhgroup": "ffdhe6144" 00:12:13.276 } 00:12:13.276 } 00:12:13.276 ]' 00:12:13.276 09:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:13.535 09:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:13.535 09:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:13.535 09:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:13.535 09:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:13.535 09:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:13.535 09:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:13.535 09:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:13.792 09:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTFkMDQyZTZhNmViZjJlMmYwZDgwZmRjZTgzM2RlZWI0MTUzZmI4OTFkYzllYTg2xOE6TQ==: --dhchap-ctrl-secret DHHC-1:03:MmU4OGI1ZTM5OTBkYzc1ODg4NmIxODY3MGQ5MWM3OTQwOTU4ZTM4NGU4Y2YzMDFiZDFlZmE0MzE0NmU3OTQ3NqAJprQ=: 00:12:13.792 09:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c --hostid 9203ba0c-8506-4f0b-a886-a7f874c4694c -l 0 --dhchap-secret DHHC-1:00:ZTFkMDQyZTZhNmViZjJlMmYwZDgwZmRjZTgzM2RlZWI0MTUzZmI4OTFkYzllYTg2xOE6TQ==: --dhchap-ctrl-secret DHHC-1:03:MmU4OGI1ZTM5OTBkYzc1ODg4NmIxODY3MGQ5MWM3OTQwOTU4ZTM4NGU4Y2YzMDFiZDFlZmE0MzE0NmU3OTQ3NqAJprQ=: 00:12:14.357 09:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:14.615 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:14.615 09:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c 00:12:14.615 09:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.615 09:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:14.615 09:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.615 09:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:14.615 09:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:12:14.615 09:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:12:14.873 09:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:12:14.873 09:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:14.873 09:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:14.873 09:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:12:14.873 09:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:14.873 09:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:14.873 09:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:14.873 09:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.873 09:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:14.873 09:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.873 09:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:14.873 09:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:14.873 09:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:15.439 00:12:15.439 09:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:15.439 09:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:15.439 09:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:15.698 09:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:15.698 09:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:15.698 09:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.698 09:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:15.698 09:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.698 09:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:15.698 { 00:12:15.698 "cntlid": 35, 00:12:15.698 "qid": 0, 00:12:15.698 "state": "enabled", 00:12:15.698 "thread": "nvmf_tgt_poll_group_000", 
00:12:15.698 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c", 00:12:15.698 "listen_address": { 00:12:15.698 "trtype": "TCP", 00:12:15.698 "adrfam": "IPv4", 00:12:15.698 "traddr": "10.0.0.3", 00:12:15.698 "trsvcid": "4420" 00:12:15.698 }, 00:12:15.698 "peer_address": { 00:12:15.698 "trtype": "TCP", 00:12:15.698 "adrfam": "IPv4", 00:12:15.698 "traddr": "10.0.0.1", 00:12:15.698 "trsvcid": "58842" 00:12:15.698 }, 00:12:15.698 "auth": { 00:12:15.698 "state": "completed", 00:12:15.698 "digest": "sha256", 00:12:15.698 "dhgroup": "ffdhe6144" 00:12:15.698 } 00:12:15.698 } 00:12:15.698 ]' 00:12:15.698 09:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:15.698 09:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:15.698 09:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:15.698 09:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:15.698 09:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:15.698 09:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:15.698 09:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:15.698 09:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:16.266 09:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTM1OTcwNTEwMDc2NGI3MjIzNzk5ZWJkM2I3OGRjMzGbGEP/: --dhchap-ctrl-secret DHHC-1:02:Y2Y0ZjA0MTgzZjFiYWY3NjlhZThiMDNhNDFkYjk3NTM5ZjM0YTM2ZTMyOGQzMDU2TyqA9w==: 00:12:16.266 09:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c --hostid 9203ba0c-8506-4f0b-a886-a7f874c4694c -l 0 --dhchap-secret DHHC-1:01:MTM1OTcwNTEwMDc2NGI3MjIzNzk5ZWJkM2I3OGRjMzGbGEP/: --dhchap-ctrl-secret DHHC-1:02:Y2Y0ZjA0MTgzZjFiYWY3NjlhZThiMDNhNDFkYjk3NTM5ZjM0YTM2ZTMyOGQzMDU2TyqA9w==: 00:12:16.852 09:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:16.852 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:16.852 09:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c 00:12:16.852 09:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.852 09:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:16.852 09:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.852 09:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:16.852 09:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:12:16.852 09:40:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:12:17.110 09:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:12:17.110 09:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:17.110 09:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:17.110 09:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:12:17.110 09:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:17.110 09:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:17.110 09:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:17.110 09:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.110 09:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:17.110 09:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.110 09:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:17.110 09:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:17.110 09:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:17.677 00:12:17.677 09:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:17.677 09:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:17.677 09:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:17.935 09:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:17.935 09:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:17.935 09:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.935 09:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:17.935 09:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.935 09:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:17.935 { 
00:12:17.935 "cntlid": 37, 00:12:17.935 "qid": 0, 00:12:17.935 "state": "enabled", 00:12:17.935 "thread": "nvmf_tgt_poll_group_000", 00:12:17.935 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c", 00:12:17.935 "listen_address": { 00:12:17.935 "trtype": "TCP", 00:12:17.935 "adrfam": "IPv4", 00:12:17.935 "traddr": "10.0.0.3", 00:12:17.935 "trsvcid": "4420" 00:12:17.935 }, 00:12:17.935 "peer_address": { 00:12:17.935 "trtype": "TCP", 00:12:17.935 "adrfam": "IPv4", 00:12:17.935 "traddr": "10.0.0.1", 00:12:17.935 "trsvcid": "52012" 00:12:17.935 }, 00:12:17.935 "auth": { 00:12:17.935 "state": "completed", 00:12:17.935 "digest": "sha256", 00:12:17.935 "dhgroup": "ffdhe6144" 00:12:17.935 } 00:12:17.935 } 00:12:17.935 ]' 00:12:17.935 09:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:17.935 09:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:17.935 09:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:18.192 09:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:18.192 09:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:18.192 09:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:18.192 09:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:18.192 09:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:18.450 09:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTI1NzhmNDY1OTc4MTIxMDVkMjg0OTQ0YzhkMzdiNzRmYjA5NWQyYmYwOWY0NDE3KVMkCw==: --dhchap-ctrl-secret DHHC-1:01:ZGEyODg5MDM3MzUxN2Q2NzQwZTY5ZTJmYjc4NTk0ZjTW3ane: 00:12:18.450 09:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c --hostid 9203ba0c-8506-4f0b-a886-a7f874c4694c -l 0 --dhchap-secret DHHC-1:02:NTI1NzhmNDY1OTc4MTIxMDVkMjg0OTQ0YzhkMzdiNzRmYjA5NWQyYmYwOWY0NDE3KVMkCw==: --dhchap-ctrl-secret DHHC-1:01:ZGEyODg5MDM3MzUxN2Q2NzQwZTY5ZTJmYjc4NTk0ZjTW3ane: 00:12:19.016 09:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:19.016 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:19.016 09:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c 00:12:19.016 09:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.016 09:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:19.016 09:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.016 09:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:19.016 09:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:12:19.017 09:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:12:19.275 09:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:12:19.275 09:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:19.275 09:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:19.275 09:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:12:19.275 09:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:19.275 09:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:19.275 09:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c --dhchap-key key3 00:12:19.275 09:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.275 09:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:19.275 09:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.275 09:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:19.275 09:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:19.275 09:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:19.841 00:12:19.841 09:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:19.841 09:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:19.841 09:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:20.099 09:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:20.099 09:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:20.099 09:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.099 09:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:20.099 09:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.099 09:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 
00:12:20.099 { 00:12:20.099 "cntlid": 39, 00:12:20.099 "qid": 0, 00:12:20.099 "state": "enabled", 00:12:20.099 "thread": "nvmf_tgt_poll_group_000", 00:12:20.099 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c", 00:12:20.099 "listen_address": { 00:12:20.099 "trtype": "TCP", 00:12:20.099 "adrfam": "IPv4", 00:12:20.099 "traddr": "10.0.0.3", 00:12:20.099 "trsvcid": "4420" 00:12:20.099 }, 00:12:20.099 "peer_address": { 00:12:20.099 "trtype": "TCP", 00:12:20.099 "adrfam": "IPv4", 00:12:20.099 "traddr": "10.0.0.1", 00:12:20.099 "trsvcid": "52040" 00:12:20.099 }, 00:12:20.099 "auth": { 00:12:20.099 "state": "completed", 00:12:20.099 "digest": "sha256", 00:12:20.099 "dhgroup": "ffdhe6144" 00:12:20.099 } 00:12:20.099 } 00:12:20.099 ]' 00:12:20.099 09:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:20.099 09:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:20.099 09:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:20.099 09:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:20.099 09:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:20.357 09:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:20.357 09:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:20.357 09:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:20.615 09:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MzhlYzIxNzUwOThmN2UzOTM4ZTBkYTA1MjIwMzZiZmMwNmZiMDFiODk3MzkyZjUxOTFkZDY3OTc1M2U2NjA0NAyL5SM=: 00:12:20.615 09:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c --hostid 9203ba0c-8506-4f0b-a886-a7f874c4694c -l 0 --dhchap-secret DHHC-1:03:MzhlYzIxNzUwOThmN2UzOTM4ZTBkYTA1MjIwMzZiZmMwNmZiMDFiODk3MzkyZjUxOTFkZDY3OTc1M2U2NjA0NAyL5SM=: 00:12:21.181 09:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:21.181 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:21.181 09:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c 00:12:21.181 09:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.181 09:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:21.181 09:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.181 09:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:21.181 09:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:21.181 09:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:12:21.181 09:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:12:21.745 09:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:12:21.745 09:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:21.745 09:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:21.745 09:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:21.745 09:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:21.745 09:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:21.745 09:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:21.745 09:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.745 09:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:21.745 09:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.746 09:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:21.746 09:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:21.746 09:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:22.311 00:12:22.312 09:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:22.312 09:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:22.312 09:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:22.570 09:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:22.570 09:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:22.570 09:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.570 09:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:22.570 09:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:12:22.570 09:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:22.570 { 00:12:22.570 "cntlid": 41, 00:12:22.570 "qid": 0, 00:12:22.570 "state": "enabled", 00:12:22.570 "thread": "nvmf_tgt_poll_group_000", 00:12:22.571 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c", 00:12:22.571 "listen_address": { 00:12:22.571 "trtype": "TCP", 00:12:22.571 "adrfam": "IPv4", 00:12:22.571 "traddr": "10.0.0.3", 00:12:22.571 "trsvcid": "4420" 00:12:22.571 }, 00:12:22.571 "peer_address": { 00:12:22.571 "trtype": "TCP", 00:12:22.571 "adrfam": "IPv4", 00:12:22.571 "traddr": "10.0.0.1", 00:12:22.571 "trsvcid": "52064" 00:12:22.571 }, 00:12:22.571 "auth": { 00:12:22.571 "state": "completed", 00:12:22.571 "digest": "sha256", 00:12:22.571 "dhgroup": "ffdhe8192" 00:12:22.571 } 00:12:22.571 } 00:12:22.571 ]' 00:12:22.571 09:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:22.571 09:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:22.571 09:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:22.571 09:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:22.571 09:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:22.829 09:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:22.829 09:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:22.829 09:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:23.086 09:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTFkMDQyZTZhNmViZjJlMmYwZDgwZmRjZTgzM2RlZWI0MTUzZmI4OTFkYzllYTg2xOE6TQ==: --dhchap-ctrl-secret DHHC-1:03:MmU4OGI1ZTM5OTBkYzc1ODg4NmIxODY3MGQ5MWM3OTQwOTU4ZTM4NGU4Y2YzMDFiZDFlZmE0MzE0NmU3OTQ3NqAJprQ=: 00:12:23.086 09:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c --hostid 9203ba0c-8506-4f0b-a886-a7f874c4694c -l 0 --dhchap-secret DHHC-1:00:ZTFkMDQyZTZhNmViZjJlMmYwZDgwZmRjZTgzM2RlZWI0MTUzZmI4OTFkYzllYTg2xOE6TQ==: --dhchap-ctrl-secret DHHC-1:03:MmU4OGI1ZTM5OTBkYzc1ODg4NmIxODY3MGQ5MWM3OTQwOTU4ZTM4NGU4Y2YzMDFiZDFlZmE0MzE0NmU3OTQ3NqAJprQ=: 00:12:23.653 09:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:23.653 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:23.653 09:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c 00:12:23.653 09:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.653 09:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:23.653 09:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
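The pass that ends with the "[[ 0 == 0 ]]" check above is one complete connect_authenticate iteration for sha256/ffdhe8192 with key index 0; the next entry starts the identical sequence for key index 1. Pulled out of the trace, one iteration boils down to the sketch below. The rpc.py path, sockets, addresses, NQNs and key names are the ones visible in the trace itself; the auth_cycle wrapper and the TARGET_SOCK default are illustrative assumptions only (the real script registers key0..key3/ckey0..ckey3 in a keyring earlier in the run and drives these calls through its own hostrpc/rpc_cmd helpers).

  #!/usr/bin/env bash
  # One DH-HMAC-CHAP round trip as traced above (sketch; see assumptions in the text).
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  HOST_SOCK=/var/tmp/host.sock          # host-side bdev_nvme application (what hostrpc talks to)
  TARGET_SOCK=/var/tmp/spdk.sock        # assumed default socket of the nvmf target (what rpc_cmd talks to)
  SUBNQN=nqn.2024-03.io.spdk:cnode0
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c
  HOSTID=9203ba0c-8506-4f0b-a886-a7f874c4694c

  auth_cycle() {                        # hypothetical helper name
      local digest=$1 dhgroup=$2 keyid=$3

      # Pin the host to a single digest/DH-group combination for the handshake.
      "$RPC" -s "$HOST_SOCK" bdev_nvme_set_options \
          --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

      # Allow the host on the subsystem with the matching key pair
      # (key index 3 is exercised without a controller key in the trace).
      "$RPC" -s "$TARGET_SOCK" nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" \
          --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

      # Userspace initiator: attach, let the target report what was negotiated, detach.
      "$RPC" -s "$HOST_SOCK" bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
          -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 \
          --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
      "$RPC" -s "$TARGET_SOCK" nvmf_subsystem_get_qpairs "$SUBNQN" | jq -r '.[0].auth'
      "$RPC" -s "$HOST_SOCK" bdev_nvme_detach_controller nvme0

      # Kernel initiator: the same secrets handed straight to nvme-cli (values elided here).
      nvme connect -t tcp -a 10.0.0.3 -n "$SUBNQN" -i 1 -q "$HOSTNQN" --hostid "$HOSTID" -l 0 \
          --dhchap-secret "DHHC-1:..." --dhchap-ctrl-secret "DHHC-1:..."
      nvme disconnect -n "$SUBNQN"

      # Drop the host again so the next key/DH-group pass starts from a clean state.
      "$RPC" -s "$TARGET_SOCK" nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"
  }

  auth_cycle sha256 ffdhe8192 1         # what the entries that follow do for key index 1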
00:12:23.653 09:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:23.653 09:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:12:23.653 09:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:12:23.911 09:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:12:23.911 09:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:23.911 09:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:23.911 09:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:23.911 09:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:23.911 09:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:23.911 09:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:23.911 09:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.911 09:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:23.911 09:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.911 09:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:23.911 09:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:23.911 09:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:24.477 00:12:24.477 09:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:24.477 09:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:24.477 09:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:24.735 09:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:24.735 09:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:24.735 09:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.735 09:40:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:24.735 09:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.735 09:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:24.735 { 00:12:24.735 "cntlid": 43, 00:12:24.735 "qid": 0, 00:12:24.735 "state": "enabled", 00:12:24.735 "thread": "nvmf_tgt_poll_group_000", 00:12:24.735 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c", 00:12:24.735 "listen_address": { 00:12:24.735 "trtype": "TCP", 00:12:24.735 "adrfam": "IPv4", 00:12:24.735 "traddr": "10.0.0.3", 00:12:24.735 "trsvcid": "4420" 00:12:24.735 }, 00:12:24.735 "peer_address": { 00:12:24.735 "trtype": "TCP", 00:12:24.735 "adrfam": "IPv4", 00:12:24.735 "traddr": "10.0.0.1", 00:12:24.735 "trsvcid": "52096" 00:12:24.735 }, 00:12:24.735 "auth": { 00:12:24.735 "state": "completed", 00:12:24.735 "digest": "sha256", 00:12:24.735 "dhgroup": "ffdhe8192" 00:12:24.735 } 00:12:24.735 } 00:12:24.735 ]' 00:12:24.735 09:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:24.993 09:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:24.993 09:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:24.993 09:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:24.993 09:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:24.993 09:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:24.993 09:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:24.993 09:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:25.252 09:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTM1OTcwNTEwMDc2NGI3MjIzNzk5ZWJkM2I3OGRjMzGbGEP/: --dhchap-ctrl-secret DHHC-1:02:Y2Y0ZjA0MTgzZjFiYWY3NjlhZThiMDNhNDFkYjk3NTM5ZjM0YTM2ZTMyOGQzMDU2TyqA9w==: 00:12:25.252 09:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c --hostid 9203ba0c-8506-4f0b-a886-a7f874c4694c -l 0 --dhchap-secret DHHC-1:01:MTM1OTcwNTEwMDc2NGI3MjIzNzk5ZWJkM2I3OGRjMzGbGEP/: --dhchap-ctrl-secret DHHC-1:02:Y2Y0ZjA0MTgzZjFiYWY3NjlhZThiMDNhNDFkYjk3NTM5ZjM0YTM2ZTMyOGQzMDU2TyqA9w==: 00:12:26.188 09:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:26.188 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:26.188 09:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c 00:12:26.188 09:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.188 09:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:12:26.188 09:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.188 09:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:26.188 09:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:12:26.188 09:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:12:26.446 09:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:12:26.446 09:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:26.446 09:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:26.446 09:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:26.446 09:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:26.446 09:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:26.446 09:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:26.446 09:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.446 09:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:26.446 09:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.446 09:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:26.447 09:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:26.447 09:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:27.063 00:12:27.063 09:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:27.063 09:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:27.063 09:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:27.322 09:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:27.322 09:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:27.322 09:40:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.322 09:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:27.322 09:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.322 09:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:27.322 { 00:12:27.322 "cntlid": 45, 00:12:27.322 "qid": 0, 00:12:27.322 "state": "enabled", 00:12:27.322 "thread": "nvmf_tgt_poll_group_000", 00:12:27.322 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c", 00:12:27.322 "listen_address": { 00:12:27.322 "trtype": "TCP", 00:12:27.322 "adrfam": "IPv4", 00:12:27.322 "traddr": "10.0.0.3", 00:12:27.322 "trsvcid": "4420" 00:12:27.322 }, 00:12:27.322 "peer_address": { 00:12:27.322 "trtype": "TCP", 00:12:27.322 "adrfam": "IPv4", 00:12:27.322 "traddr": "10.0.0.1", 00:12:27.322 "trsvcid": "37084" 00:12:27.322 }, 00:12:27.322 "auth": { 00:12:27.322 "state": "completed", 00:12:27.322 "digest": "sha256", 00:12:27.322 "dhgroup": "ffdhe8192" 00:12:27.322 } 00:12:27.322 } 00:12:27.322 ]' 00:12:27.322 09:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:27.322 09:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:27.322 09:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:27.581 09:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:27.581 09:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:27.581 09:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:27.581 09:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:27.581 09:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:27.844 09:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTI1NzhmNDY1OTc4MTIxMDVkMjg0OTQ0YzhkMzdiNzRmYjA5NWQyYmYwOWY0NDE3KVMkCw==: --dhchap-ctrl-secret DHHC-1:01:ZGEyODg5MDM3MzUxN2Q2NzQwZTY5ZTJmYjc4NTk0ZjTW3ane: 00:12:27.844 09:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c --hostid 9203ba0c-8506-4f0b-a886-a7f874c4694c -l 0 --dhchap-secret DHHC-1:02:NTI1NzhmNDY1OTc4MTIxMDVkMjg0OTQ0YzhkMzdiNzRmYjA5NWQyYmYwOWY0NDE3KVMkCw==: --dhchap-ctrl-secret DHHC-1:01:ZGEyODg5MDM3MzUxN2Q2NzQwZTY5ZTJmYjc4NTk0ZjTW3ane: 00:12:28.411 09:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:28.411 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:28.411 09:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c 00:12:28.411 09:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:12:28.411 09:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:28.411 09:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.411 09:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:28.411 09:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:12:28.411 09:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:12:28.988 09:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:12:28.988 09:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:28.988 09:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:28.988 09:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:28.988 09:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:28.988 09:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:28.988 09:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c --dhchap-key key3 00:12:28.988 09:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.988 09:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:28.988 09:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.988 09:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:28.988 09:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:28.989 09:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:29.555 00:12:29.555 09:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:29.555 09:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:29.555 09:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:29.813 09:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:29.814 09:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:29.814 
09:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.814 09:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:29.814 09:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.814 09:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:29.814 { 00:12:29.814 "cntlid": 47, 00:12:29.814 "qid": 0, 00:12:29.814 "state": "enabled", 00:12:29.814 "thread": "nvmf_tgt_poll_group_000", 00:12:29.814 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c", 00:12:29.814 "listen_address": { 00:12:29.814 "trtype": "TCP", 00:12:29.814 "adrfam": "IPv4", 00:12:29.814 "traddr": "10.0.0.3", 00:12:29.814 "trsvcid": "4420" 00:12:29.814 }, 00:12:29.814 "peer_address": { 00:12:29.814 "trtype": "TCP", 00:12:29.814 "adrfam": "IPv4", 00:12:29.814 "traddr": "10.0.0.1", 00:12:29.814 "trsvcid": "37124" 00:12:29.814 }, 00:12:29.814 "auth": { 00:12:29.814 "state": "completed", 00:12:29.814 "digest": "sha256", 00:12:29.814 "dhgroup": "ffdhe8192" 00:12:29.814 } 00:12:29.814 } 00:12:29.814 ]' 00:12:29.814 09:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:29.814 09:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:29.814 09:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:30.071 09:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:30.071 09:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:30.071 09:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:30.071 09:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:30.071 09:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:30.329 09:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MzhlYzIxNzUwOThmN2UzOTM4ZTBkYTA1MjIwMzZiZmMwNmZiMDFiODk3MzkyZjUxOTFkZDY3OTc1M2U2NjA0NAyL5SM=: 00:12:30.329 09:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c --hostid 9203ba0c-8506-4f0b-a886-a7f874c4694c -l 0 --dhchap-secret DHHC-1:03:MzhlYzIxNzUwOThmN2UzOTM4ZTBkYTA1MjIwMzZiZmMwNmZiMDFiODk3MzkyZjUxOTFkZDY3OTc1M2U2NjA0NAyL5SM=: 00:12:30.894 09:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:30.894 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:30.894 09:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c 00:12:30.894 09:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.894 09:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:12:30.894 09:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.894 09:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:12:30.894 09:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:30.894 09:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:30.894 09:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:30.894 09:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:31.458 09:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:12:31.458 09:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:31.458 09:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:31.458 09:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:31.458 09:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:31.458 09:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:31.458 09:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:31.458 09:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.458 09:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:31.458 09:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.458 09:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:31.458 09:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:31.458 09:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:31.714 00:12:31.714 09:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:31.714 09:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:31.714 09:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:31.971 09:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:31.971 09:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:31.971 09:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.971 09:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:31.971 09:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.971 09:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:31.971 { 00:12:31.971 "cntlid": 49, 00:12:31.971 "qid": 0, 00:12:31.971 "state": "enabled", 00:12:31.971 "thread": "nvmf_tgt_poll_group_000", 00:12:31.971 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c", 00:12:31.971 "listen_address": { 00:12:31.971 "trtype": "TCP", 00:12:31.971 "adrfam": "IPv4", 00:12:31.971 "traddr": "10.0.0.3", 00:12:31.971 "trsvcid": "4420" 00:12:31.971 }, 00:12:31.971 "peer_address": { 00:12:31.971 "trtype": "TCP", 00:12:31.971 "adrfam": "IPv4", 00:12:31.971 "traddr": "10.0.0.1", 00:12:31.971 "trsvcid": "37148" 00:12:31.971 }, 00:12:31.971 "auth": { 00:12:31.971 "state": "completed", 00:12:31.971 "digest": "sha384", 00:12:31.971 "dhgroup": "null" 00:12:31.971 } 00:12:31.971 } 00:12:31.971 ]' 00:12:31.971 09:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:31.971 09:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:31.971 09:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:31.971 09:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:31.971 09:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:31.971 09:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:31.971 09:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:31.971 09:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:32.229 09:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTFkMDQyZTZhNmViZjJlMmYwZDgwZmRjZTgzM2RlZWI0MTUzZmI4OTFkYzllYTg2xOE6TQ==: --dhchap-ctrl-secret DHHC-1:03:MmU4OGI1ZTM5OTBkYzc1ODg4NmIxODY3MGQ5MWM3OTQwOTU4ZTM4NGU4Y2YzMDFiZDFlZmE0MzE0NmU3OTQ3NqAJprQ=: 00:12:32.229 09:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c --hostid 9203ba0c-8506-4f0b-a886-a7f874c4694c -l 0 --dhchap-secret DHHC-1:00:ZTFkMDQyZTZhNmViZjJlMmYwZDgwZmRjZTgzM2RlZWI0MTUzZmI4OTFkYzllYTg2xOE6TQ==: --dhchap-ctrl-secret DHHC-1:03:MmU4OGI1ZTM5OTBkYzc1ODg4NmIxODY3MGQ5MWM3OTQwOTU4ZTM4NGU4Y2YzMDFiZDFlZmE0MzE0NmU3OTQ3NqAJprQ=: 00:12:33.163 09:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:33.163 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:33.163 09:40:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c 00:12:33.163 09:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.163 09:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:33.163 09:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.163 09:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:33.163 09:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:33.163 09:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:33.422 09:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:12:33.422 09:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:33.422 09:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:33.422 09:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:33.422 09:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:33.422 09:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:33.422 09:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:33.422 09:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.422 09:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:33.422 09:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.422 09:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:33.422 09:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:33.422 09:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:33.680 00:12:33.680 09:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:33.680 09:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:33.680 09:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:33.939 09:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:33.939 09:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:33.939 09:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.939 09:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:33.939 09:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.939 09:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:33.939 { 00:12:33.939 "cntlid": 51, 00:12:33.939 "qid": 0, 00:12:33.939 "state": "enabled", 00:12:33.939 "thread": "nvmf_tgt_poll_group_000", 00:12:33.939 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c", 00:12:33.939 "listen_address": { 00:12:33.939 "trtype": "TCP", 00:12:33.939 "adrfam": "IPv4", 00:12:33.939 "traddr": "10.0.0.3", 00:12:33.939 "trsvcid": "4420" 00:12:33.939 }, 00:12:33.939 "peer_address": { 00:12:33.939 "trtype": "TCP", 00:12:33.939 "adrfam": "IPv4", 00:12:33.939 "traddr": "10.0.0.1", 00:12:33.939 "trsvcid": "37182" 00:12:33.939 }, 00:12:33.939 "auth": { 00:12:33.939 "state": "completed", 00:12:33.939 "digest": "sha384", 00:12:33.939 "dhgroup": "null" 00:12:33.939 } 00:12:33.939 } 00:12:33.939 ]' 00:12:33.939 09:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:34.197 09:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:34.197 09:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:34.197 09:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:34.197 09:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:34.197 09:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:34.197 09:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:34.197 09:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:34.455 09:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTM1OTcwNTEwMDc2NGI3MjIzNzk5ZWJkM2I3OGRjMzGbGEP/: --dhchap-ctrl-secret DHHC-1:02:Y2Y0ZjA0MTgzZjFiYWY3NjlhZThiMDNhNDFkYjk3NTM5ZjM0YTM2ZTMyOGQzMDU2TyqA9w==: 00:12:34.455 09:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c --hostid 9203ba0c-8506-4f0b-a886-a7f874c4694c -l 0 --dhchap-secret DHHC-1:01:MTM1OTcwNTEwMDc2NGI3MjIzNzk5ZWJkM2I3OGRjMzGbGEP/: --dhchap-ctrl-secret DHHC-1:02:Y2Y0ZjA0MTgzZjFiYWY3NjlhZThiMDNhNDFkYjk3NTM5ZjM0YTM2ZTMyOGQzMDU2TyqA9w==: 00:12:35.020 09:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:35.020 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:35.020 09:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c 00:12:35.020 09:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.020 09:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:35.020 09:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.020 09:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:35.020 09:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:35.020 09:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:35.278 09:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:12:35.278 09:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:35.278 09:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:35.278 09:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:35.278 09:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:35.278 09:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:35.278 09:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:35.278 09:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.278 09:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:35.536 09:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.536 09:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:35.536 09:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:35.536 09:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:35.793 00:12:35.793 09:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:35.793 09:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:12:35.793 09:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:36.051 09:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:36.051 09:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:36.051 09:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.051 09:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:36.051 09:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.051 09:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:36.051 { 00:12:36.051 "cntlid": 53, 00:12:36.051 "qid": 0, 00:12:36.051 "state": "enabled", 00:12:36.051 "thread": "nvmf_tgt_poll_group_000", 00:12:36.051 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c", 00:12:36.051 "listen_address": { 00:12:36.051 "trtype": "TCP", 00:12:36.051 "adrfam": "IPv4", 00:12:36.051 "traddr": "10.0.0.3", 00:12:36.051 "trsvcid": "4420" 00:12:36.051 }, 00:12:36.051 "peer_address": { 00:12:36.051 "trtype": "TCP", 00:12:36.051 "adrfam": "IPv4", 00:12:36.051 "traddr": "10.0.0.1", 00:12:36.051 "trsvcid": "37198" 00:12:36.051 }, 00:12:36.051 "auth": { 00:12:36.051 "state": "completed", 00:12:36.051 "digest": "sha384", 00:12:36.051 "dhgroup": "null" 00:12:36.051 } 00:12:36.051 } 00:12:36.051 ]' 00:12:36.051 09:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:36.051 09:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:36.051 09:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:36.051 09:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:36.051 09:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:36.051 09:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:36.051 09:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:36.051 09:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:36.616 09:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTI1NzhmNDY1OTc4MTIxMDVkMjg0OTQ0YzhkMzdiNzRmYjA5NWQyYmYwOWY0NDE3KVMkCw==: --dhchap-ctrl-secret DHHC-1:01:ZGEyODg5MDM3MzUxN2Q2NzQwZTY5ZTJmYjc4NTk0ZjTW3ane: 00:12:36.616 09:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c --hostid 9203ba0c-8506-4f0b-a886-a7f874c4694c -l 0 --dhchap-secret DHHC-1:02:NTI1NzhmNDY1OTc4MTIxMDVkMjg0OTQ0YzhkMzdiNzRmYjA5NWQyYmYwOWY0NDE3KVMkCw==: --dhchap-ctrl-secret DHHC-1:01:ZGEyODg5MDM3MzUxN2Q2NzQwZTY5ZTJmYjc4NTk0ZjTW3ane: 00:12:37.181 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:37.181 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:37.181 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c 00:12:37.181 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.181 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:37.181 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.181 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:37.181 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:37.181 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:37.438 09:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:12:37.438 09:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:37.438 09:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:37.438 09:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:37.438 09:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:37.438 09:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:37.438 09:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c --dhchap-key key3 00:12:37.438 09:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.438 09:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:37.438 09:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.438 09:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:37.438 09:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:37.438 09:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:38.003 00:12:38.003 09:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:38.003 09:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 
00:12:38.003 09:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:38.260 09:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:38.260 09:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:38.260 09:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.260 09:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:38.260 09:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.260 09:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:38.260 { 00:12:38.260 "cntlid": 55, 00:12:38.260 "qid": 0, 00:12:38.260 "state": "enabled", 00:12:38.260 "thread": "nvmf_tgt_poll_group_000", 00:12:38.260 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c", 00:12:38.260 "listen_address": { 00:12:38.260 "trtype": "TCP", 00:12:38.260 "adrfam": "IPv4", 00:12:38.260 "traddr": "10.0.0.3", 00:12:38.260 "trsvcid": "4420" 00:12:38.260 }, 00:12:38.260 "peer_address": { 00:12:38.260 "trtype": "TCP", 00:12:38.260 "adrfam": "IPv4", 00:12:38.260 "traddr": "10.0.0.1", 00:12:38.260 "trsvcid": "57016" 00:12:38.260 }, 00:12:38.260 "auth": { 00:12:38.260 "state": "completed", 00:12:38.260 "digest": "sha384", 00:12:38.260 "dhgroup": "null" 00:12:38.260 } 00:12:38.260 } 00:12:38.260 ]' 00:12:38.260 09:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:38.260 09:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:38.260 09:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:38.260 09:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:38.260 09:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:38.260 09:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:38.260 09:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:38.260 09:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:38.518 09:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MzhlYzIxNzUwOThmN2UzOTM4ZTBkYTA1MjIwMzZiZmMwNmZiMDFiODk3MzkyZjUxOTFkZDY3OTc1M2U2NjA0NAyL5SM=: 00:12:38.518 09:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c --hostid 9203ba0c-8506-4f0b-a886-a7f874c4694c -l 0 --dhchap-secret DHHC-1:03:MzhlYzIxNzUwOThmN2UzOTM4ZTBkYTA1MjIwMzZiZmMwNmZiMDFiODk3MzkyZjUxOTFkZDY3OTc1M2U2NjA0NAyL5SM=: 00:12:39.451 09:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:39.451 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
00:12:39.451 09:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c 00:12:39.451 09:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.451 09:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:39.451 09:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.451 09:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:39.451 09:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:39.451 09:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:39.451 09:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:39.451 09:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:12:39.451 09:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:39.451 09:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:39.451 09:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:39.451 09:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:39.451 09:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:39.451 09:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:39.451 09:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.451 09:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:39.451 09:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.451 09:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:39.451 09:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:39.451 09:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:40.016 00:12:40.016 09:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:40.016 
09:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:40.016 09:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:40.274 09:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:40.274 09:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:40.274 09:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.274 09:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:40.274 09:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.274 09:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:40.274 { 00:12:40.274 "cntlid": 57, 00:12:40.274 "qid": 0, 00:12:40.274 "state": "enabled", 00:12:40.274 "thread": "nvmf_tgt_poll_group_000", 00:12:40.274 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c", 00:12:40.274 "listen_address": { 00:12:40.274 "trtype": "TCP", 00:12:40.274 "adrfam": "IPv4", 00:12:40.274 "traddr": "10.0.0.3", 00:12:40.274 "trsvcid": "4420" 00:12:40.274 }, 00:12:40.274 "peer_address": { 00:12:40.274 "trtype": "TCP", 00:12:40.274 "adrfam": "IPv4", 00:12:40.274 "traddr": "10.0.0.1", 00:12:40.274 "trsvcid": "57056" 00:12:40.274 }, 00:12:40.274 "auth": { 00:12:40.274 "state": "completed", 00:12:40.274 "digest": "sha384", 00:12:40.274 "dhgroup": "ffdhe2048" 00:12:40.274 } 00:12:40.274 } 00:12:40.274 ]' 00:12:40.274 09:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:40.274 09:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:40.274 09:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:40.274 09:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:40.274 09:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:40.274 09:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:40.274 09:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:40.274 09:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:40.532 09:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTFkMDQyZTZhNmViZjJlMmYwZDgwZmRjZTgzM2RlZWI0MTUzZmI4OTFkYzllYTg2xOE6TQ==: --dhchap-ctrl-secret DHHC-1:03:MmU4OGI1ZTM5OTBkYzc1ODg4NmIxODY3MGQ5MWM3OTQwOTU4ZTM4NGU4Y2YzMDFiZDFlZmE0MzE0NmU3OTQ3NqAJprQ=: 00:12:40.532 09:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c --hostid 9203ba0c-8506-4f0b-a886-a7f874c4694c -l 0 --dhchap-secret DHHC-1:00:ZTFkMDQyZTZhNmViZjJlMmYwZDgwZmRjZTgzM2RlZWI0MTUzZmI4OTFkYzllYTg2xOE6TQ==: 
--dhchap-ctrl-secret DHHC-1:03:MmU4OGI1ZTM5OTBkYzc1ODg4NmIxODY3MGQ5MWM3OTQwOTU4ZTM4NGU4Y2YzMDFiZDFlZmE0MzE0NmU3OTQ3NqAJprQ=: 00:12:41.495 09:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:41.495 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:41.495 09:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c 00:12:41.495 09:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.495 09:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:41.495 09:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.496 09:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:41.496 09:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:41.496 09:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:41.759 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:12:41.759 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:41.759 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:41.760 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:41.760 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:41.760 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:41.760 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:41.760 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.760 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:41.760 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.760 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:41.760 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:41.760 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:42.020 00:12:42.020 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:42.020 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:42.020 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:42.276 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:42.277 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:42.277 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.277 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:42.277 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.277 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:42.277 { 00:12:42.277 "cntlid": 59, 00:12:42.277 "qid": 0, 00:12:42.277 "state": "enabled", 00:12:42.277 "thread": "nvmf_tgt_poll_group_000", 00:12:42.277 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c", 00:12:42.277 "listen_address": { 00:12:42.277 "trtype": "TCP", 00:12:42.277 "adrfam": "IPv4", 00:12:42.277 "traddr": "10.0.0.3", 00:12:42.277 "trsvcid": "4420" 00:12:42.277 }, 00:12:42.277 "peer_address": { 00:12:42.277 "trtype": "TCP", 00:12:42.277 "adrfam": "IPv4", 00:12:42.277 "traddr": "10.0.0.1", 00:12:42.277 "trsvcid": "57082" 00:12:42.277 }, 00:12:42.277 "auth": { 00:12:42.277 "state": "completed", 00:12:42.277 "digest": "sha384", 00:12:42.277 "dhgroup": "ffdhe2048" 00:12:42.277 } 00:12:42.277 } 00:12:42.277 ]' 00:12:42.277 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:42.277 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:42.277 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:42.277 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:42.277 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:42.534 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:42.534 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:42.534 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:42.791 09:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTM1OTcwNTEwMDc2NGI3MjIzNzk5ZWJkM2I3OGRjMzGbGEP/: --dhchap-ctrl-secret DHHC-1:02:Y2Y0ZjA0MTgzZjFiYWY3NjlhZThiMDNhNDFkYjk3NTM5ZjM0YTM2ZTMyOGQzMDU2TyqA9w==: 00:12:42.791 09:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c --hostid 9203ba0c-8506-4f0b-a886-a7f874c4694c -l 0 --dhchap-secret DHHC-1:01:MTM1OTcwNTEwMDc2NGI3MjIzNzk5ZWJkM2I3OGRjMzGbGEP/: --dhchap-ctrl-secret DHHC-1:02:Y2Y0ZjA0MTgzZjFiYWY3NjlhZThiMDNhNDFkYjk3NTM5ZjM0YTM2ZTMyOGQzMDU2TyqA9w==: 00:12:43.356 09:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:43.356 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:43.356 09:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c 00:12:43.356 09:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.356 09:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:43.356 09:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.356 09:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:43.356 09:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:43.356 09:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:43.921 09:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:12:43.921 09:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:43.921 09:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:43.921 09:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:43.921 09:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:43.921 09:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:43.921 09:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:43.921 09:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.921 09:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:43.921 09:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.921 09:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:43.921 09:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:43.921 09:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:44.179 00:12:44.179 09:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:44.179 09:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:44.179 09:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:44.437 09:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:44.437 09:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:44.437 09:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.437 09:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:44.437 09:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.437 09:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:44.437 { 00:12:44.437 "cntlid": 61, 00:12:44.437 "qid": 0, 00:12:44.437 "state": "enabled", 00:12:44.437 "thread": "nvmf_tgt_poll_group_000", 00:12:44.437 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c", 00:12:44.437 "listen_address": { 00:12:44.437 "trtype": "TCP", 00:12:44.437 "adrfam": "IPv4", 00:12:44.437 "traddr": "10.0.0.3", 00:12:44.437 "trsvcid": "4420" 00:12:44.437 }, 00:12:44.437 "peer_address": { 00:12:44.437 "trtype": "TCP", 00:12:44.437 "adrfam": "IPv4", 00:12:44.437 "traddr": "10.0.0.1", 00:12:44.437 "trsvcid": "57116" 00:12:44.437 }, 00:12:44.437 "auth": { 00:12:44.437 "state": "completed", 00:12:44.437 "digest": "sha384", 00:12:44.437 "dhgroup": "ffdhe2048" 00:12:44.437 } 00:12:44.437 } 00:12:44.437 ]' 00:12:44.437 09:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:44.437 09:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:44.437 09:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:44.437 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:44.437 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:44.694 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:44.695 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:44.695 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:44.953 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTI1NzhmNDY1OTc4MTIxMDVkMjg0OTQ0YzhkMzdiNzRmYjA5NWQyYmYwOWY0NDE3KVMkCw==: --dhchap-ctrl-secret DHHC-1:01:ZGEyODg5MDM3MzUxN2Q2NzQwZTY5ZTJmYjc4NTk0ZjTW3ane: 00:12:44.953 09:40:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c --hostid 9203ba0c-8506-4f0b-a886-a7f874c4694c -l 0 --dhchap-secret DHHC-1:02:NTI1NzhmNDY1OTc4MTIxMDVkMjg0OTQ0YzhkMzdiNzRmYjA5NWQyYmYwOWY0NDE3KVMkCw==: --dhchap-ctrl-secret DHHC-1:01:ZGEyODg5MDM3MzUxN2Q2NzQwZTY5ZTJmYjc4NTk0ZjTW3ane: 00:12:45.616 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:45.616 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:45.616 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c 00:12:45.616 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.616 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:45.616 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.616 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:45.616 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:45.616 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:45.902 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:12:45.902 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:45.902 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:45.902 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:45.902 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:45.902 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:45.902 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c --dhchap-key key3 00:12:45.902 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.902 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:45.902 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.902 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:45.902 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:45.902 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:46.160 00:12:46.160 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:46.160 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:46.160 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:46.419 09:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:46.419 09:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:46.419 09:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.419 09:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:46.419 09:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.419 09:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:46.419 { 00:12:46.419 "cntlid": 63, 00:12:46.419 "qid": 0, 00:12:46.419 "state": "enabled", 00:12:46.419 "thread": "nvmf_tgt_poll_group_000", 00:12:46.419 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c", 00:12:46.419 "listen_address": { 00:12:46.419 "trtype": "TCP", 00:12:46.419 "adrfam": "IPv4", 00:12:46.419 "traddr": "10.0.0.3", 00:12:46.419 "trsvcid": "4420" 00:12:46.419 }, 00:12:46.419 "peer_address": { 00:12:46.419 "trtype": "TCP", 00:12:46.419 "adrfam": "IPv4", 00:12:46.419 "traddr": "10.0.0.1", 00:12:46.419 "trsvcid": "43716" 00:12:46.419 }, 00:12:46.419 "auth": { 00:12:46.419 "state": "completed", 00:12:46.419 "digest": "sha384", 00:12:46.419 "dhgroup": "ffdhe2048" 00:12:46.419 } 00:12:46.419 } 00:12:46.419 ]' 00:12:46.419 09:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:46.682 09:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:46.682 09:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:46.682 09:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:46.682 09:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:46.682 09:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:46.682 09:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:46.682 09:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:46.939 09:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MzhlYzIxNzUwOThmN2UzOTM4ZTBkYTA1MjIwMzZiZmMwNmZiMDFiODk3MzkyZjUxOTFkZDY3OTc1M2U2NjA0NAyL5SM=: 00:12:46.939 09:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c --hostid 9203ba0c-8506-4f0b-a886-a7f874c4694c -l 0 --dhchap-secret DHHC-1:03:MzhlYzIxNzUwOThmN2UzOTM4ZTBkYTA1MjIwMzZiZmMwNmZiMDFiODk3MzkyZjUxOTFkZDY3OTc1M2U2NjA0NAyL5SM=: 00:12:47.874 09:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:47.874 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:47.874 09:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c 00:12:47.874 09:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.874 09:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:47.874 09:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.874 09:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:47.874 09:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:47.874 09:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:47.875 09:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:47.875 09:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:12:47.875 09:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:47.875 09:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:47.875 09:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:47.875 09:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:47.875 09:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:47.875 09:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:47.875 09:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.875 09:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:47.875 09:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.875 09:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:47.875 09:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:12:47.875 09:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:48.441 00:12:48.441 09:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:48.441 09:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:48.441 09:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:48.698 09:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:48.698 09:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:48.698 09:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.698 09:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:48.698 09:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.698 09:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:48.698 { 00:12:48.698 "cntlid": 65, 00:12:48.698 "qid": 0, 00:12:48.698 "state": "enabled", 00:12:48.698 "thread": "nvmf_tgt_poll_group_000", 00:12:48.699 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c", 00:12:48.699 "listen_address": { 00:12:48.699 "trtype": "TCP", 00:12:48.699 "adrfam": "IPv4", 00:12:48.699 "traddr": "10.0.0.3", 00:12:48.699 "trsvcid": "4420" 00:12:48.699 }, 00:12:48.699 "peer_address": { 00:12:48.699 "trtype": "TCP", 00:12:48.699 "adrfam": "IPv4", 00:12:48.699 "traddr": "10.0.0.1", 00:12:48.699 "trsvcid": "43746" 00:12:48.699 }, 00:12:48.699 "auth": { 00:12:48.699 "state": "completed", 00:12:48.699 "digest": "sha384", 00:12:48.699 "dhgroup": "ffdhe3072" 00:12:48.699 } 00:12:48.699 } 00:12:48.699 ]' 00:12:48.699 09:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:48.699 09:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:48.699 09:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:48.957 09:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:48.957 09:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:48.957 09:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:48.957 09:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:48.957 09:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:49.215 09:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:ZTFkMDQyZTZhNmViZjJlMmYwZDgwZmRjZTgzM2RlZWI0MTUzZmI4OTFkYzllYTg2xOE6TQ==: --dhchap-ctrl-secret DHHC-1:03:MmU4OGI1ZTM5OTBkYzc1ODg4NmIxODY3MGQ5MWM3OTQwOTU4ZTM4NGU4Y2YzMDFiZDFlZmE0MzE0NmU3OTQ3NqAJprQ=: 00:12:49.215 09:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c --hostid 9203ba0c-8506-4f0b-a886-a7f874c4694c -l 0 --dhchap-secret DHHC-1:00:ZTFkMDQyZTZhNmViZjJlMmYwZDgwZmRjZTgzM2RlZWI0MTUzZmI4OTFkYzllYTg2xOE6TQ==: --dhchap-ctrl-secret DHHC-1:03:MmU4OGI1ZTM5OTBkYzc1ODg4NmIxODY3MGQ5MWM3OTQwOTU4ZTM4NGU4Y2YzMDFiZDFlZmE0MzE0NmU3OTQ3NqAJprQ=: 00:12:50.150 09:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:50.150 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:50.150 09:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c 00:12:50.150 09:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.150 09:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:50.150 09:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.150 09:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:50.150 09:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:50.150 09:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:50.150 09:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:12:50.150 09:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:50.150 09:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:50.150 09:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:50.150 09:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:50.150 09:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:50.150 09:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:50.150 09:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.150 09:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:50.408 09:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.408 09:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:50.408 09:40:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:50.408 09:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:50.667 00:12:50.667 09:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:50.667 09:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:50.667 09:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:50.927 09:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:50.927 09:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:50.927 09:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.927 09:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:50.927 09:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.927 09:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:50.927 { 00:12:50.927 "cntlid": 67, 00:12:50.927 "qid": 0, 00:12:50.927 "state": "enabled", 00:12:50.927 "thread": "nvmf_tgt_poll_group_000", 00:12:50.927 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c", 00:12:50.927 "listen_address": { 00:12:50.927 "trtype": "TCP", 00:12:50.927 "adrfam": "IPv4", 00:12:50.927 "traddr": "10.0.0.3", 00:12:50.927 "trsvcid": "4420" 00:12:50.927 }, 00:12:50.927 "peer_address": { 00:12:50.927 "trtype": "TCP", 00:12:50.927 "adrfam": "IPv4", 00:12:50.927 "traddr": "10.0.0.1", 00:12:50.927 "trsvcid": "43786" 00:12:50.927 }, 00:12:50.927 "auth": { 00:12:50.927 "state": "completed", 00:12:50.927 "digest": "sha384", 00:12:50.927 "dhgroup": "ffdhe3072" 00:12:50.927 } 00:12:50.927 } 00:12:50.927 ]' 00:12:50.927 09:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:51.186 09:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:51.186 09:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:51.186 09:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:51.186 09:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:51.186 09:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:51.186 09:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:51.187 09:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:51.445 09:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTM1OTcwNTEwMDc2NGI3MjIzNzk5ZWJkM2I3OGRjMzGbGEP/: --dhchap-ctrl-secret DHHC-1:02:Y2Y0ZjA0MTgzZjFiYWY3NjlhZThiMDNhNDFkYjk3NTM5ZjM0YTM2ZTMyOGQzMDU2TyqA9w==: 00:12:51.445 09:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c --hostid 9203ba0c-8506-4f0b-a886-a7f874c4694c -l 0 --dhchap-secret DHHC-1:01:MTM1OTcwNTEwMDc2NGI3MjIzNzk5ZWJkM2I3OGRjMzGbGEP/: --dhchap-ctrl-secret DHHC-1:02:Y2Y0ZjA0MTgzZjFiYWY3NjlhZThiMDNhNDFkYjk3NTM5ZjM0YTM2ZTMyOGQzMDU2TyqA9w==: 00:12:52.379 09:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:52.379 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:52.379 09:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c 00:12:52.379 09:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.379 09:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:52.379 09:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.379 09:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:52.379 09:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:52.379 09:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:52.638 09:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:12:52.638 09:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:52.638 09:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:52.638 09:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:52.638 09:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:52.638 09:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:52.638 09:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:52.638 09:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.638 09:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:52.638 09:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.638 09:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:52.638 09:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:52.638 09:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:52.897 00:12:52.897 09:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:52.897 09:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:52.897 09:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:53.178 09:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:53.178 09:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:53.178 09:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.178 09:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:53.178 09:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.178 09:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:53.178 { 00:12:53.178 "cntlid": 69, 00:12:53.178 "qid": 0, 00:12:53.178 "state": "enabled", 00:12:53.178 "thread": "nvmf_tgt_poll_group_000", 00:12:53.178 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c", 00:12:53.178 "listen_address": { 00:12:53.178 "trtype": "TCP", 00:12:53.178 "adrfam": "IPv4", 00:12:53.178 "traddr": "10.0.0.3", 00:12:53.178 "trsvcid": "4420" 00:12:53.178 }, 00:12:53.178 "peer_address": { 00:12:53.178 "trtype": "TCP", 00:12:53.178 "adrfam": "IPv4", 00:12:53.178 "traddr": "10.0.0.1", 00:12:53.178 "trsvcid": "43820" 00:12:53.178 }, 00:12:53.178 "auth": { 00:12:53.178 "state": "completed", 00:12:53.178 "digest": "sha384", 00:12:53.178 "dhgroup": "ffdhe3072" 00:12:53.178 } 00:12:53.178 } 00:12:53.178 ]' 00:12:53.178 09:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:53.178 09:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:53.178 09:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:53.437 09:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:53.437 09:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:53.437 09:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:53.437 09:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:12:53.437 09:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:53.695 09:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTI1NzhmNDY1OTc4MTIxMDVkMjg0OTQ0YzhkMzdiNzRmYjA5NWQyYmYwOWY0NDE3KVMkCw==: --dhchap-ctrl-secret DHHC-1:01:ZGEyODg5MDM3MzUxN2Q2NzQwZTY5ZTJmYjc4NTk0ZjTW3ane: 00:12:53.695 09:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c --hostid 9203ba0c-8506-4f0b-a886-a7f874c4694c -l 0 --dhchap-secret DHHC-1:02:NTI1NzhmNDY1OTc4MTIxMDVkMjg0OTQ0YzhkMzdiNzRmYjA5NWQyYmYwOWY0NDE3KVMkCw==: --dhchap-ctrl-secret DHHC-1:01:ZGEyODg5MDM3MzUxN2Q2NzQwZTY5ZTJmYjc4NTk0ZjTW3ane: 00:12:54.633 09:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:54.633 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:54.633 09:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c 00:12:54.633 09:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.633 09:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:54.633 09:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.633 09:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:54.633 09:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:54.633 09:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:54.633 09:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:12:54.633 09:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:54.633 09:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:54.633 09:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:54.633 09:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:54.633 09:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:54.633 09:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c --dhchap-key key3 00:12:54.633 09:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.633 09:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:54.633 09:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.633 09:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:54.633 09:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:54.633 09:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:55.201 00:12:55.201 09:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:55.201 09:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:55.201 09:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:55.459 09:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:55.459 09:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:55.459 09:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.459 09:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:55.459 09:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.459 09:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:55.459 { 00:12:55.459 "cntlid": 71, 00:12:55.459 "qid": 0, 00:12:55.459 "state": "enabled", 00:12:55.459 "thread": "nvmf_tgt_poll_group_000", 00:12:55.459 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c", 00:12:55.459 "listen_address": { 00:12:55.459 "trtype": "TCP", 00:12:55.459 "adrfam": "IPv4", 00:12:55.459 "traddr": "10.0.0.3", 00:12:55.459 "trsvcid": "4420" 00:12:55.459 }, 00:12:55.459 "peer_address": { 00:12:55.459 "trtype": "TCP", 00:12:55.459 "adrfam": "IPv4", 00:12:55.459 "traddr": "10.0.0.1", 00:12:55.459 "trsvcid": "43846" 00:12:55.459 }, 00:12:55.459 "auth": { 00:12:55.459 "state": "completed", 00:12:55.459 "digest": "sha384", 00:12:55.459 "dhgroup": "ffdhe3072" 00:12:55.459 } 00:12:55.459 } 00:12:55.459 ]' 00:12:55.459 09:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:55.459 09:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:55.460 09:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:55.460 09:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:55.460 09:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:55.460 09:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:55.460 09:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:55.460 09:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:55.718 09:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MzhlYzIxNzUwOThmN2UzOTM4ZTBkYTA1MjIwMzZiZmMwNmZiMDFiODk3MzkyZjUxOTFkZDY3OTc1M2U2NjA0NAyL5SM=: 00:12:55.718 09:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c --hostid 9203ba0c-8506-4f0b-a886-a7f874c4694c -l 0 --dhchap-secret DHHC-1:03:MzhlYzIxNzUwOThmN2UzOTM4ZTBkYTA1MjIwMzZiZmMwNmZiMDFiODk3MzkyZjUxOTFkZDY3OTc1M2U2NjA0NAyL5SM=: 00:12:56.655 09:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:56.655 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:56.655 09:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c 00:12:56.655 09:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.655 09:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:56.655 09:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.655 09:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:56.655 09:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:56.655 09:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:56.655 09:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:56.914 09:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:12:56.914 09:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:56.914 09:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:56.914 09:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:56.914 09:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:56.914 09:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:56.914 09:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:56.914 09:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.914 09:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:56.914 09:40:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.914 09:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:56.914 09:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:56.914 09:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:57.248 00:12:57.248 09:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:57.248 09:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:57.248 09:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:57.510 09:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:57.510 09:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:57.510 09:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.510 09:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:57.510 09:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.510 09:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:57.510 { 00:12:57.510 "cntlid": 73, 00:12:57.510 "qid": 0, 00:12:57.510 "state": "enabled", 00:12:57.510 "thread": "nvmf_tgt_poll_group_000", 00:12:57.510 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c", 00:12:57.510 "listen_address": { 00:12:57.510 "trtype": "TCP", 00:12:57.510 "adrfam": "IPv4", 00:12:57.510 "traddr": "10.0.0.3", 00:12:57.510 "trsvcid": "4420" 00:12:57.510 }, 00:12:57.510 "peer_address": { 00:12:57.510 "trtype": "TCP", 00:12:57.510 "adrfam": "IPv4", 00:12:57.510 "traddr": "10.0.0.1", 00:12:57.510 "trsvcid": "38058" 00:12:57.510 }, 00:12:57.510 "auth": { 00:12:57.510 "state": "completed", 00:12:57.510 "digest": "sha384", 00:12:57.510 "dhgroup": "ffdhe4096" 00:12:57.510 } 00:12:57.510 } 00:12:57.510 ]' 00:12:57.510 09:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:57.771 09:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:57.771 09:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:57.771 09:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:57.771 09:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:57.771 09:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- 
# [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:57.771 09:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:57.771 09:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:58.031 09:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTFkMDQyZTZhNmViZjJlMmYwZDgwZmRjZTgzM2RlZWI0MTUzZmI4OTFkYzllYTg2xOE6TQ==: --dhchap-ctrl-secret DHHC-1:03:MmU4OGI1ZTM5OTBkYzc1ODg4NmIxODY3MGQ5MWM3OTQwOTU4ZTM4NGU4Y2YzMDFiZDFlZmE0MzE0NmU3OTQ3NqAJprQ=: 00:12:58.031 09:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c --hostid 9203ba0c-8506-4f0b-a886-a7f874c4694c -l 0 --dhchap-secret DHHC-1:00:ZTFkMDQyZTZhNmViZjJlMmYwZDgwZmRjZTgzM2RlZWI0MTUzZmI4OTFkYzllYTg2xOE6TQ==: --dhchap-ctrl-secret DHHC-1:03:MmU4OGI1ZTM5OTBkYzc1ODg4NmIxODY3MGQ5MWM3OTQwOTU4ZTM4NGU4Y2YzMDFiZDFlZmE0MzE0NmU3OTQ3NqAJprQ=: 00:12:58.598 09:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:58.598 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:58.598 09:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c 00:12:58.598 09:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.599 09:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:58.599 09:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.599 09:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:58.599 09:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:58.599 09:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:59.165 09:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:12:59.165 09:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:59.165 09:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:59.165 09:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:59.165 09:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:59.165 09:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:59.165 09:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:59.165 09:40:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.165 09:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:59.165 09:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.165 09:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:59.165 09:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:59.165 09:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:59.423 00:12:59.423 09:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:59.423 09:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:59.423 09:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:59.681 09:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:59.681 09:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:59.681 09:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.681 09:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:59.681 09:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.681 09:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:59.681 { 00:12:59.681 "cntlid": 75, 00:12:59.681 "qid": 0, 00:12:59.681 "state": "enabled", 00:12:59.681 "thread": "nvmf_tgt_poll_group_000", 00:12:59.681 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c", 00:12:59.681 "listen_address": { 00:12:59.681 "trtype": "TCP", 00:12:59.681 "adrfam": "IPv4", 00:12:59.681 "traddr": "10.0.0.3", 00:12:59.681 "trsvcid": "4420" 00:12:59.681 }, 00:12:59.681 "peer_address": { 00:12:59.681 "trtype": "TCP", 00:12:59.681 "adrfam": "IPv4", 00:12:59.681 "traddr": "10.0.0.1", 00:12:59.681 "trsvcid": "38076" 00:12:59.681 }, 00:12:59.681 "auth": { 00:12:59.681 "state": "completed", 00:12:59.681 "digest": "sha384", 00:12:59.681 "dhgroup": "ffdhe4096" 00:12:59.681 } 00:12:59.681 } 00:12:59.681 ]' 00:12:59.681 09:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:59.681 09:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:59.681 09:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:59.939 09:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 
== \f\f\d\h\e\4\0\9\6 ]] 00:12:59.939 09:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:59.939 09:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:59.939 09:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:59.939 09:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:00.197 09:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTM1OTcwNTEwMDc2NGI3MjIzNzk5ZWJkM2I3OGRjMzGbGEP/: --dhchap-ctrl-secret DHHC-1:02:Y2Y0ZjA0MTgzZjFiYWY3NjlhZThiMDNhNDFkYjk3NTM5ZjM0YTM2ZTMyOGQzMDU2TyqA9w==: 00:13:00.197 09:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c --hostid 9203ba0c-8506-4f0b-a886-a7f874c4694c -l 0 --dhchap-secret DHHC-1:01:MTM1OTcwNTEwMDc2NGI3MjIzNzk5ZWJkM2I3OGRjMzGbGEP/: --dhchap-ctrl-secret DHHC-1:02:Y2Y0ZjA0MTgzZjFiYWY3NjlhZThiMDNhNDFkYjk3NTM5ZjM0YTM2ZTMyOGQzMDU2TyqA9w==: 00:13:01.132 09:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:01.132 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:01.132 09:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c 00:13:01.132 09:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.132 09:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:01.132 09:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.132 09:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:01.132 09:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:01.132 09:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:01.132 09:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:13:01.132 09:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:01.132 09:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:01.132 09:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:01.132 09:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:01.132 09:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:01.132 09:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:01.132 09:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.132 09:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:01.132 09:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.132 09:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:01.132 09:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:01.132 09:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:01.707 00:13:01.707 09:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:01.707 09:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:01.707 09:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:01.965 09:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:01.965 09:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:01.965 09:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.965 09:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:01.965 09:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.965 09:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:01.965 { 00:13:01.965 "cntlid": 77, 00:13:01.965 "qid": 0, 00:13:01.965 "state": "enabled", 00:13:01.965 "thread": "nvmf_tgt_poll_group_000", 00:13:01.965 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c", 00:13:01.965 "listen_address": { 00:13:01.965 "trtype": "TCP", 00:13:01.965 "adrfam": "IPv4", 00:13:01.965 "traddr": "10.0.0.3", 00:13:01.965 "trsvcid": "4420" 00:13:01.965 }, 00:13:01.965 "peer_address": { 00:13:01.965 "trtype": "TCP", 00:13:01.965 "adrfam": "IPv4", 00:13:01.965 "traddr": "10.0.0.1", 00:13:01.965 "trsvcid": "38088" 00:13:01.965 }, 00:13:01.965 "auth": { 00:13:01.965 "state": "completed", 00:13:01.965 "digest": "sha384", 00:13:01.965 "dhgroup": "ffdhe4096" 00:13:01.965 } 00:13:01.965 } 00:13:01.965 ]' 00:13:01.965 09:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:01.965 09:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:01.965 09:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- 
# jq -r '.[0].auth.dhgroup' 00:13:01.965 09:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:01.965 09:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:02.224 09:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:02.224 09:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:02.224 09:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:02.482 09:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTI1NzhmNDY1OTc4MTIxMDVkMjg0OTQ0YzhkMzdiNzRmYjA5NWQyYmYwOWY0NDE3KVMkCw==: --dhchap-ctrl-secret DHHC-1:01:ZGEyODg5MDM3MzUxN2Q2NzQwZTY5ZTJmYjc4NTk0ZjTW3ane: 00:13:02.482 09:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c --hostid 9203ba0c-8506-4f0b-a886-a7f874c4694c -l 0 --dhchap-secret DHHC-1:02:NTI1NzhmNDY1OTc4MTIxMDVkMjg0OTQ0YzhkMzdiNzRmYjA5NWQyYmYwOWY0NDE3KVMkCw==: --dhchap-ctrl-secret DHHC-1:01:ZGEyODg5MDM3MzUxN2Q2NzQwZTY5ZTJmYjc4NTk0ZjTW3ane: 00:13:03.050 09:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:03.050 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:03.050 09:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c 00:13:03.050 09:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.050 09:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:03.050 09:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.050 09:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:03.050 09:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:03.050 09:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:03.617 09:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:13:03.617 09:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:03.617 09:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:03.617 09:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:03.617 09:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:03.617 09:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:03.617 09:40:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c --dhchap-key key3 00:13:03.617 09:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.617 09:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:03.617 09:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.617 09:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:03.617 09:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:03.617 09:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:03.875 00:13:03.875 09:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:03.875 09:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:03.875 09:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:04.132 09:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:04.132 09:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:04.132 09:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.132 09:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:04.132 09:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.132 09:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:04.132 { 00:13:04.132 "cntlid": 79, 00:13:04.132 "qid": 0, 00:13:04.132 "state": "enabled", 00:13:04.132 "thread": "nvmf_tgt_poll_group_000", 00:13:04.132 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c", 00:13:04.132 "listen_address": { 00:13:04.132 "trtype": "TCP", 00:13:04.132 "adrfam": "IPv4", 00:13:04.132 "traddr": "10.0.0.3", 00:13:04.132 "trsvcid": "4420" 00:13:04.132 }, 00:13:04.132 "peer_address": { 00:13:04.132 "trtype": "TCP", 00:13:04.132 "adrfam": "IPv4", 00:13:04.132 "traddr": "10.0.0.1", 00:13:04.132 "trsvcid": "38106" 00:13:04.133 }, 00:13:04.133 "auth": { 00:13:04.133 "state": "completed", 00:13:04.133 "digest": "sha384", 00:13:04.133 "dhgroup": "ffdhe4096" 00:13:04.133 } 00:13:04.133 } 00:13:04.133 ]' 00:13:04.133 09:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:04.133 09:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:04.133 09:40:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:04.391 09:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:04.391 09:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:04.391 09:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:04.391 09:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:04.391 09:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:04.649 09:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MzhlYzIxNzUwOThmN2UzOTM4ZTBkYTA1MjIwMzZiZmMwNmZiMDFiODk3MzkyZjUxOTFkZDY3OTc1M2U2NjA0NAyL5SM=: 00:13:04.649 09:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c --hostid 9203ba0c-8506-4f0b-a886-a7f874c4694c -l 0 --dhchap-secret DHHC-1:03:MzhlYzIxNzUwOThmN2UzOTM4ZTBkYTA1MjIwMzZiZmMwNmZiMDFiODk3MzkyZjUxOTFkZDY3OTc1M2U2NjA0NAyL5SM=: 00:13:05.620 09:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:05.620 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:05.620 09:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c 00:13:05.620 09:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.620 09:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:05.620 09:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.620 09:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:05.620 09:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:05.620 09:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:05.620 09:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:05.620 09:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:13:05.620 09:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:05.620 09:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:05.620 09:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:05.620 09:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:05.620 09:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:05.620 09:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:05.620 09:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.620 09:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:05.620 09:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.620 09:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:05.620 09:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:05.620 09:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:06.188 00:13:06.188 09:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:06.188 09:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:06.188 09:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:06.447 09:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:06.447 09:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:06.447 09:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.447 09:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:06.447 09:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.447 09:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:06.447 { 00:13:06.447 "cntlid": 81, 00:13:06.447 "qid": 0, 00:13:06.447 "state": "enabled", 00:13:06.447 "thread": "nvmf_tgt_poll_group_000", 00:13:06.447 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c", 00:13:06.447 "listen_address": { 00:13:06.447 "trtype": "TCP", 00:13:06.447 "adrfam": "IPv4", 00:13:06.447 "traddr": "10.0.0.3", 00:13:06.447 "trsvcid": "4420" 00:13:06.447 }, 00:13:06.447 "peer_address": { 00:13:06.447 "trtype": "TCP", 00:13:06.447 "adrfam": "IPv4", 00:13:06.447 "traddr": "10.0.0.1", 00:13:06.447 "trsvcid": "42882" 00:13:06.447 }, 00:13:06.447 "auth": { 00:13:06.447 "state": "completed", 00:13:06.447 "digest": "sha384", 00:13:06.447 "dhgroup": "ffdhe6144" 00:13:06.447 } 00:13:06.447 } 00:13:06.447 ]' 00:13:06.447 09:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 
00:13:06.447 09:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:06.447 09:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:06.705 09:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:06.705 09:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:06.705 09:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:06.705 09:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:06.705 09:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:06.964 09:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTFkMDQyZTZhNmViZjJlMmYwZDgwZmRjZTgzM2RlZWI0MTUzZmI4OTFkYzllYTg2xOE6TQ==: --dhchap-ctrl-secret DHHC-1:03:MmU4OGI1ZTM5OTBkYzc1ODg4NmIxODY3MGQ5MWM3OTQwOTU4ZTM4NGU4Y2YzMDFiZDFlZmE0MzE0NmU3OTQ3NqAJprQ=: 00:13:06.964 09:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c --hostid 9203ba0c-8506-4f0b-a886-a7f874c4694c -l 0 --dhchap-secret DHHC-1:00:ZTFkMDQyZTZhNmViZjJlMmYwZDgwZmRjZTgzM2RlZWI0MTUzZmI4OTFkYzllYTg2xOE6TQ==: --dhchap-ctrl-secret DHHC-1:03:MmU4OGI1ZTM5OTBkYzc1ODg4NmIxODY3MGQ5MWM3OTQwOTU4ZTM4NGU4Y2YzMDFiZDFlZmE0MzE0NmU3OTQ3NqAJprQ=: 00:13:07.530 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:07.530 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:07.530 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c 00:13:07.530 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.530 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:07.530 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.530 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:07.530 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:07.531 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:08.099 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:13:08.099 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:08.099 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:08.099 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe6144 00:13:08.099 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:08.099 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:08.099 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:08.099 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.099 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:08.099 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.099 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:08.099 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:08.099 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:08.357 00:13:08.357 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:08.357 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:08.357 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:08.615 09:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:08.615 09:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:08.615 09:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.615 09:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:08.615 09:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.615 09:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:08.615 { 00:13:08.615 "cntlid": 83, 00:13:08.615 "qid": 0, 00:13:08.615 "state": "enabled", 00:13:08.615 "thread": "nvmf_tgt_poll_group_000", 00:13:08.615 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c", 00:13:08.615 "listen_address": { 00:13:08.615 "trtype": "TCP", 00:13:08.615 "adrfam": "IPv4", 00:13:08.615 "traddr": "10.0.0.3", 00:13:08.615 "trsvcid": "4420" 00:13:08.615 }, 00:13:08.615 "peer_address": { 00:13:08.615 "trtype": "TCP", 00:13:08.615 "adrfam": "IPv4", 00:13:08.615 "traddr": "10.0.0.1", 00:13:08.615 "trsvcid": "42912" 00:13:08.615 }, 00:13:08.615 "auth": { 00:13:08.615 "state": "completed", 00:13:08.615 "digest": "sha384", 
00:13:08.615 "dhgroup": "ffdhe6144" 00:13:08.615 } 00:13:08.615 } 00:13:08.616 ]' 00:13:08.901 09:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:08.901 09:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:08.901 09:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:08.901 09:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:08.901 09:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:08.901 09:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:08.901 09:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:08.901 09:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:09.159 09:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTM1OTcwNTEwMDc2NGI3MjIzNzk5ZWJkM2I3OGRjMzGbGEP/: --dhchap-ctrl-secret DHHC-1:02:Y2Y0ZjA0MTgzZjFiYWY3NjlhZThiMDNhNDFkYjk3NTM5ZjM0YTM2ZTMyOGQzMDU2TyqA9w==: 00:13:09.159 09:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c --hostid 9203ba0c-8506-4f0b-a886-a7f874c4694c -l 0 --dhchap-secret DHHC-1:01:MTM1OTcwNTEwMDc2NGI3MjIzNzk5ZWJkM2I3OGRjMzGbGEP/: --dhchap-ctrl-secret DHHC-1:02:Y2Y0ZjA0MTgzZjFiYWY3NjlhZThiMDNhNDFkYjk3NTM5ZjM0YTM2ZTMyOGQzMDU2TyqA9w==: 00:13:10.092 09:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:10.092 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:10.092 09:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c 00:13:10.092 09:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.092 09:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:10.092 09:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.092 09:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:10.092 09:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:10.092 09:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:10.092 09:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:13:10.092 09:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:10.092 09:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
digest=sha384 00:13:10.092 09:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:10.092 09:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:10.092 09:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:10.092 09:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:10.092 09:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.092 09:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:10.093 09:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.093 09:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:10.093 09:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:10.093 09:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:10.658 00:13:10.658 09:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:10.658 09:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:10.658 09:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:10.918 09:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:10.918 09:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:10.918 09:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.918 09:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:10.918 09:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.918 09:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:10.918 { 00:13:10.918 "cntlid": 85, 00:13:10.918 "qid": 0, 00:13:10.918 "state": "enabled", 00:13:10.918 "thread": "nvmf_tgt_poll_group_000", 00:13:10.918 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c", 00:13:10.918 "listen_address": { 00:13:10.918 "trtype": "TCP", 00:13:10.918 "adrfam": "IPv4", 00:13:10.918 "traddr": "10.0.0.3", 00:13:10.918 "trsvcid": "4420" 00:13:10.918 }, 00:13:10.918 "peer_address": { 00:13:10.918 "trtype": "TCP", 00:13:10.918 "adrfam": "IPv4", 00:13:10.918 "traddr": "10.0.0.1", 00:13:10.918 "trsvcid": "42938" 
00:13:10.918 }, 00:13:10.918 "auth": { 00:13:10.918 "state": "completed", 00:13:10.918 "digest": "sha384", 00:13:10.918 "dhgroup": "ffdhe6144" 00:13:10.918 } 00:13:10.918 } 00:13:10.918 ]' 00:13:11.178 09:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:11.178 09:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:11.178 09:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:11.178 09:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:11.178 09:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:11.178 09:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:11.178 09:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:11.178 09:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:11.436 09:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTI1NzhmNDY1OTc4MTIxMDVkMjg0OTQ0YzhkMzdiNzRmYjA5NWQyYmYwOWY0NDE3KVMkCw==: --dhchap-ctrl-secret DHHC-1:01:ZGEyODg5MDM3MzUxN2Q2NzQwZTY5ZTJmYjc4NTk0ZjTW3ane: 00:13:11.436 09:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c --hostid 9203ba0c-8506-4f0b-a886-a7f874c4694c -l 0 --dhchap-secret DHHC-1:02:NTI1NzhmNDY1OTc4MTIxMDVkMjg0OTQ0YzhkMzdiNzRmYjA5NWQyYmYwOWY0NDE3KVMkCw==: --dhchap-ctrl-secret DHHC-1:01:ZGEyODg5MDM3MzUxN2Q2NzQwZTY5ZTJmYjc4NTk0ZjTW3ane: 00:13:12.369 09:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:12.369 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:12.369 09:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c 00:13:12.369 09:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.369 09:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:12.369 09:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.369 09:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:12.369 09:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:12.369 09:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:12.626 09:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:13:12.626 09:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key 
ckey qpairs 00:13:12.627 09:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:12.627 09:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:12.627 09:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:12.627 09:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:12.627 09:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c --dhchap-key key3 00:13:12.627 09:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.627 09:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:12.627 09:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.627 09:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:12.627 09:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:12.627 09:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:12.885 00:13:12.885 09:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:12.885 09:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:12.885 09:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:13.143 09:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:13.143 09:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:13.143 09:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.143 09:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:13.400 09:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.400 09:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:13.400 { 00:13:13.400 "cntlid": 87, 00:13:13.400 "qid": 0, 00:13:13.400 "state": "enabled", 00:13:13.400 "thread": "nvmf_tgt_poll_group_000", 00:13:13.400 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c", 00:13:13.400 "listen_address": { 00:13:13.400 "trtype": "TCP", 00:13:13.400 "adrfam": "IPv4", 00:13:13.400 "traddr": "10.0.0.3", 00:13:13.400 "trsvcid": "4420" 00:13:13.400 }, 00:13:13.400 "peer_address": { 00:13:13.400 "trtype": "TCP", 00:13:13.400 "adrfam": "IPv4", 00:13:13.400 "traddr": "10.0.0.1", 00:13:13.400 "trsvcid": 
"42960" 00:13:13.400 }, 00:13:13.400 "auth": { 00:13:13.400 "state": "completed", 00:13:13.400 "digest": "sha384", 00:13:13.400 "dhgroup": "ffdhe6144" 00:13:13.400 } 00:13:13.400 } 00:13:13.400 ]' 00:13:13.400 09:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:13.400 09:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:13.400 09:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:13.400 09:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:13.400 09:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:13.400 09:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:13.400 09:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:13.400 09:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:13.657 09:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MzhlYzIxNzUwOThmN2UzOTM4ZTBkYTA1MjIwMzZiZmMwNmZiMDFiODk3MzkyZjUxOTFkZDY3OTc1M2U2NjA0NAyL5SM=: 00:13:13.657 09:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c --hostid 9203ba0c-8506-4f0b-a886-a7f874c4694c -l 0 --dhchap-secret DHHC-1:03:MzhlYzIxNzUwOThmN2UzOTM4ZTBkYTA1MjIwMzZiZmMwNmZiMDFiODk3MzkyZjUxOTFkZDY3OTc1M2U2NjA0NAyL5SM=: 00:13:14.591 09:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:14.591 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:14.591 09:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c 00:13:14.591 09:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.591 09:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:14.591 09:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.591 09:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:14.591 09:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:14.591 09:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:13:14.591 09:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:13:14.849 09:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:13:14.849 09:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest 
dhgroup key ckey qpairs 00:13:14.849 09:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:14.849 09:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:14.849 09:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:14.849 09:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:14.849 09:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:14.849 09:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.849 09:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:14.849 09:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.849 09:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:14.849 09:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:14.849 09:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:15.414 00:13:15.414 09:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:15.414 09:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:15.414 09:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:15.672 09:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:15.672 09:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:15.672 09:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.672 09:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:15.672 09:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.673 09:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:15.673 { 00:13:15.673 "cntlid": 89, 00:13:15.673 "qid": 0, 00:13:15.673 "state": "enabled", 00:13:15.673 "thread": "nvmf_tgt_poll_group_000", 00:13:15.673 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c", 00:13:15.673 "listen_address": { 00:13:15.673 "trtype": "TCP", 00:13:15.673 "adrfam": "IPv4", 00:13:15.673 "traddr": "10.0.0.3", 00:13:15.673 "trsvcid": "4420" 00:13:15.673 }, 00:13:15.673 "peer_address": { 00:13:15.673 
"trtype": "TCP", 00:13:15.673 "adrfam": "IPv4", 00:13:15.673 "traddr": "10.0.0.1", 00:13:15.673 "trsvcid": "42984" 00:13:15.673 }, 00:13:15.673 "auth": { 00:13:15.673 "state": "completed", 00:13:15.673 "digest": "sha384", 00:13:15.673 "dhgroup": "ffdhe8192" 00:13:15.673 } 00:13:15.673 } 00:13:15.673 ]' 00:13:15.673 09:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:15.673 09:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:15.673 09:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:15.931 09:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:15.931 09:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:15.931 09:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:15.931 09:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:15.931 09:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:16.189 09:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTFkMDQyZTZhNmViZjJlMmYwZDgwZmRjZTgzM2RlZWI0MTUzZmI4OTFkYzllYTg2xOE6TQ==: --dhchap-ctrl-secret DHHC-1:03:MmU4OGI1ZTM5OTBkYzc1ODg4NmIxODY3MGQ5MWM3OTQwOTU4ZTM4NGU4Y2YzMDFiZDFlZmE0MzE0NmU3OTQ3NqAJprQ=: 00:13:16.189 09:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c --hostid 9203ba0c-8506-4f0b-a886-a7f874c4694c -l 0 --dhchap-secret DHHC-1:00:ZTFkMDQyZTZhNmViZjJlMmYwZDgwZmRjZTgzM2RlZWI0MTUzZmI4OTFkYzllYTg2xOE6TQ==: --dhchap-ctrl-secret DHHC-1:03:MmU4OGI1ZTM5OTBkYzc1ODg4NmIxODY3MGQ5MWM3OTQwOTU4ZTM4NGU4Y2YzMDFiZDFlZmE0MzE0NmU3OTQ3NqAJprQ=: 00:13:16.757 09:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:16.757 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:16.757 09:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c 00:13:16.757 09:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.757 09:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:16.757 09:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.757 09:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:16.757 09:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:13:16.757 09:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:13:17.326 09:41:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:13:17.326 09:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:17.326 09:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:17.326 09:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:17.326 09:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:17.326 09:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:17.326 09:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:17.326 09:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.326 09:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:17.326 09:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.326 09:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:17.326 09:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:17.326 09:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:17.958 00:13:17.958 09:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:17.958 09:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:17.959 09:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:18.218 09:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:18.218 09:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:18.218 09:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.218 09:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:18.218 09:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.218 09:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:18.218 { 00:13:18.218 "cntlid": 91, 00:13:18.218 "qid": 0, 00:13:18.218 "state": "enabled", 00:13:18.218 "thread": "nvmf_tgt_poll_group_000", 00:13:18.218 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c", 
00:13:18.218 "listen_address": { 00:13:18.218 "trtype": "TCP", 00:13:18.218 "adrfam": "IPv4", 00:13:18.218 "traddr": "10.0.0.3", 00:13:18.218 "trsvcid": "4420" 00:13:18.218 }, 00:13:18.218 "peer_address": { 00:13:18.218 "trtype": "TCP", 00:13:18.218 "adrfam": "IPv4", 00:13:18.218 "traddr": "10.0.0.1", 00:13:18.218 "trsvcid": "42814" 00:13:18.218 }, 00:13:18.218 "auth": { 00:13:18.218 "state": "completed", 00:13:18.218 "digest": "sha384", 00:13:18.218 "dhgroup": "ffdhe8192" 00:13:18.218 } 00:13:18.218 } 00:13:18.218 ]' 00:13:18.218 09:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:18.218 09:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:18.218 09:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:18.218 09:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:18.218 09:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:18.218 09:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:18.218 09:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:18.218 09:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:18.786 09:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTM1OTcwNTEwMDc2NGI3MjIzNzk5ZWJkM2I3OGRjMzGbGEP/: --dhchap-ctrl-secret DHHC-1:02:Y2Y0ZjA0MTgzZjFiYWY3NjlhZThiMDNhNDFkYjk3NTM5ZjM0YTM2ZTMyOGQzMDU2TyqA9w==: 00:13:18.786 09:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c --hostid 9203ba0c-8506-4f0b-a886-a7f874c4694c -l 0 --dhchap-secret DHHC-1:01:MTM1OTcwNTEwMDc2NGI3MjIzNzk5ZWJkM2I3OGRjMzGbGEP/: --dhchap-ctrl-secret DHHC-1:02:Y2Y0ZjA0MTgzZjFiYWY3NjlhZThiMDNhNDFkYjk3NTM5ZjM0YTM2ZTMyOGQzMDU2TyqA9w==: 00:13:19.354 09:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:19.354 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:19.354 09:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c 00:13:19.354 09:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.354 09:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:19.354 09:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.354 09:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:19.355 09:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:13:19.355 09:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:13:19.614 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:13:19.614 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:19.614 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:19.614 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:19.614 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:19.614 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:19.614 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:19.614 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.614 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:19.614 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.614 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:19.614 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:19.614 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:20.180 00:13:20.180 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:20.180 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:20.180 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:20.439 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:20.439 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:20.439 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.439 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:20.439 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.439 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:20.439 { 00:13:20.439 "cntlid": 93, 00:13:20.439 "qid": 0, 00:13:20.439 "state": "enabled", 00:13:20.439 "thread": 
"nvmf_tgt_poll_group_000", 00:13:20.439 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c", 00:13:20.439 "listen_address": { 00:13:20.439 "trtype": "TCP", 00:13:20.439 "adrfam": "IPv4", 00:13:20.439 "traddr": "10.0.0.3", 00:13:20.439 "trsvcid": "4420" 00:13:20.439 }, 00:13:20.439 "peer_address": { 00:13:20.439 "trtype": "TCP", 00:13:20.439 "adrfam": "IPv4", 00:13:20.439 "traddr": "10.0.0.1", 00:13:20.439 "trsvcid": "42840" 00:13:20.439 }, 00:13:20.439 "auth": { 00:13:20.439 "state": "completed", 00:13:20.439 "digest": "sha384", 00:13:20.439 "dhgroup": "ffdhe8192" 00:13:20.439 } 00:13:20.439 } 00:13:20.439 ]' 00:13:20.439 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:20.439 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:20.439 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:20.698 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:20.698 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:20.698 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:20.698 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:20.698 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:20.958 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTI1NzhmNDY1OTc4MTIxMDVkMjg0OTQ0YzhkMzdiNzRmYjA5NWQyYmYwOWY0NDE3KVMkCw==: --dhchap-ctrl-secret DHHC-1:01:ZGEyODg5MDM3MzUxN2Q2NzQwZTY5ZTJmYjc4NTk0ZjTW3ane: 00:13:20.958 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c --hostid 9203ba0c-8506-4f0b-a886-a7f874c4694c -l 0 --dhchap-secret DHHC-1:02:NTI1NzhmNDY1OTc4MTIxMDVkMjg0OTQ0YzhkMzdiNzRmYjA5NWQyYmYwOWY0NDE3KVMkCw==: --dhchap-ctrl-secret DHHC-1:01:ZGEyODg5MDM3MzUxN2Q2NzQwZTY5ZTJmYjc4NTk0ZjTW3ane: 00:13:21.538 09:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:21.538 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:21.538 09:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c 00:13:21.538 09:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.538 09:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:21.538 09:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.538 09:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:21.538 09:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:13:21.538 09:41:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:13:22.104 09:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:13:22.104 09:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:22.104 09:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:22.104 09:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:22.104 09:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:22.104 09:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:22.104 09:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c --dhchap-key key3 00:13:22.104 09:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.104 09:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:22.104 09:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.104 09:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:22.104 09:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:22.104 09:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:22.671 00:13:22.671 09:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:22.671 09:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:22.671 09:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:22.929 09:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:22.929 09:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:22.929 09:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.929 09:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:22.929 09:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.929 09:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:22.929 { 00:13:22.929 "cntlid": 95, 00:13:22.929 "qid": 0, 00:13:22.929 "state": "enabled", 00:13:22.929 
"thread": "nvmf_tgt_poll_group_000", 00:13:22.929 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c", 00:13:22.929 "listen_address": { 00:13:22.929 "trtype": "TCP", 00:13:22.929 "adrfam": "IPv4", 00:13:22.929 "traddr": "10.0.0.3", 00:13:22.929 "trsvcid": "4420" 00:13:22.929 }, 00:13:22.929 "peer_address": { 00:13:22.929 "trtype": "TCP", 00:13:22.929 "adrfam": "IPv4", 00:13:22.929 "traddr": "10.0.0.1", 00:13:22.929 "trsvcid": "42872" 00:13:22.929 }, 00:13:22.929 "auth": { 00:13:22.929 "state": "completed", 00:13:22.929 "digest": "sha384", 00:13:22.929 "dhgroup": "ffdhe8192" 00:13:22.929 } 00:13:22.929 } 00:13:22.929 ]' 00:13:22.929 09:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:22.929 09:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:22.929 09:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:23.188 09:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:23.188 09:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:23.188 09:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:23.188 09:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:23.188 09:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:23.446 09:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MzhlYzIxNzUwOThmN2UzOTM4ZTBkYTA1MjIwMzZiZmMwNmZiMDFiODk3MzkyZjUxOTFkZDY3OTc1M2U2NjA0NAyL5SM=: 00:13:23.446 09:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c --hostid 9203ba0c-8506-4f0b-a886-a7f874c4694c -l 0 --dhchap-secret DHHC-1:03:MzhlYzIxNzUwOThmN2UzOTM4ZTBkYTA1MjIwMzZiZmMwNmZiMDFiODk3MzkyZjUxOTFkZDY3OTc1M2U2NjA0NAyL5SM=: 00:13:24.012 09:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:24.012 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:24.012 09:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c 00:13:24.012 09:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.012 09:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:24.012 09:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.012 09:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:13:24.012 09:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:24.012 09:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:24.012 09:41:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:13:24.012 09:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:13:24.580 09:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:13:24.580 09:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:24.580 09:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:24.580 09:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:13:24.580 09:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:24.580 09:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:24.580 09:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:24.580 09:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.580 09:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:24.580 09:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.580 09:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:24.580 09:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:24.580 09:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:24.839 00:13:24.839 09:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:24.839 09:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:24.839 09:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:25.099 09:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:25.099 09:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:25.099 09:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.099 09:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:25.099 09:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.099 09:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:25.099 { 00:13:25.099 "cntlid": 97, 00:13:25.099 "qid": 0, 00:13:25.099 "state": "enabled", 00:13:25.099 "thread": "nvmf_tgt_poll_group_000", 00:13:25.099 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c", 00:13:25.099 "listen_address": { 00:13:25.099 "trtype": "TCP", 00:13:25.099 "adrfam": "IPv4", 00:13:25.099 "traddr": "10.0.0.3", 00:13:25.099 "trsvcid": "4420" 00:13:25.099 }, 00:13:25.099 "peer_address": { 00:13:25.099 "trtype": "TCP", 00:13:25.099 "adrfam": "IPv4", 00:13:25.099 "traddr": "10.0.0.1", 00:13:25.099 "trsvcid": "42896" 00:13:25.099 }, 00:13:25.099 "auth": { 00:13:25.099 "state": "completed", 00:13:25.099 "digest": "sha512", 00:13:25.099 "dhgroup": "null" 00:13:25.099 } 00:13:25.099 } 00:13:25.099 ]' 00:13:25.099 09:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:25.099 09:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:25.099 09:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:25.099 09:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:13:25.099 09:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:25.358 09:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:25.358 09:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:25.358 09:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:25.621 09:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTFkMDQyZTZhNmViZjJlMmYwZDgwZmRjZTgzM2RlZWI0MTUzZmI4OTFkYzllYTg2xOE6TQ==: --dhchap-ctrl-secret DHHC-1:03:MmU4OGI1ZTM5OTBkYzc1ODg4NmIxODY3MGQ5MWM3OTQwOTU4ZTM4NGU4Y2YzMDFiZDFlZmE0MzE0NmU3OTQ3NqAJprQ=: 00:13:25.621 09:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c --hostid 9203ba0c-8506-4f0b-a886-a7f874c4694c -l 0 --dhchap-secret DHHC-1:00:ZTFkMDQyZTZhNmViZjJlMmYwZDgwZmRjZTgzM2RlZWI0MTUzZmI4OTFkYzllYTg2xOE6TQ==: --dhchap-ctrl-secret DHHC-1:03:MmU4OGI1ZTM5OTBkYzc1ODg4NmIxODY3MGQ5MWM3OTQwOTU4ZTM4NGU4Y2YzMDFiZDFlZmE0MzE0NmU3OTQ3NqAJprQ=: 00:13:26.190 09:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:26.190 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:26.190 09:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c 00:13:26.190 09:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.190 09:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:26.448 09:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:13:26.448 09:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:26.448 09:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:13:26.448 09:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:13:26.706 09:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:13:26.706 09:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:26.706 09:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:26.706 09:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:13:26.706 09:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:26.706 09:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:26.706 09:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:26.706 09:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.706 09:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:26.706 09:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.706 09:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:26.707 09:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:26.707 09:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:26.965 00:13:26.965 09:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:26.965 09:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:26.965 09:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:27.224 09:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:27.224 09:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:27.224 09:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.224 09:41:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:27.224 09:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.224 09:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:27.224 { 00:13:27.224 "cntlid": 99, 00:13:27.224 "qid": 0, 00:13:27.224 "state": "enabled", 00:13:27.224 "thread": "nvmf_tgt_poll_group_000", 00:13:27.224 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c", 00:13:27.224 "listen_address": { 00:13:27.224 "trtype": "TCP", 00:13:27.224 "adrfam": "IPv4", 00:13:27.224 "traddr": "10.0.0.3", 00:13:27.224 "trsvcid": "4420" 00:13:27.224 }, 00:13:27.224 "peer_address": { 00:13:27.224 "trtype": "TCP", 00:13:27.224 "adrfam": "IPv4", 00:13:27.224 "traddr": "10.0.0.1", 00:13:27.224 "trsvcid": "46378" 00:13:27.224 }, 00:13:27.224 "auth": { 00:13:27.224 "state": "completed", 00:13:27.224 "digest": "sha512", 00:13:27.224 "dhgroup": "null" 00:13:27.224 } 00:13:27.224 } 00:13:27.224 ]' 00:13:27.224 09:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:27.224 09:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:27.224 09:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:27.483 09:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:13:27.483 09:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:27.483 09:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:27.483 09:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:27.483 09:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:27.742 09:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTM1OTcwNTEwMDc2NGI3MjIzNzk5ZWJkM2I3OGRjMzGbGEP/: --dhchap-ctrl-secret DHHC-1:02:Y2Y0ZjA0MTgzZjFiYWY3NjlhZThiMDNhNDFkYjk3NTM5ZjM0YTM2ZTMyOGQzMDU2TyqA9w==: 00:13:27.742 09:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c --hostid 9203ba0c-8506-4f0b-a886-a7f874c4694c -l 0 --dhchap-secret DHHC-1:01:MTM1OTcwNTEwMDc2NGI3MjIzNzk5ZWJkM2I3OGRjMzGbGEP/: --dhchap-ctrl-secret DHHC-1:02:Y2Y0ZjA0MTgzZjFiYWY3NjlhZThiMDNhNDFkYjk3NTM5ZjM0YTM2ZTMyOGQzMDU2TyqA9w==: 00:13:28.309 09:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:28.568 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:28.568 09:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c 00:13:28.568 09:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.568 09:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:28.568 09:41:15 
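[editorial note] The qpair dump is also where the script proves that authentication happened with the requested parameters, not merely that the connect succeeded. A minimal sketch of that check, using the sha512/null values from the iteration above and shown against rpc.py's default target socket (the trace's rpc_cmd wrapper handles socket selection for the target app):

  # Sketch: verify the negotiated auth parameters on the target side.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  qpairs=$($rpc nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)

  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512    ]]   # digest that was negotiated
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == null      ]]   # "null" = no FF-DHE exchange in the handshake
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]   # handshake finished successfully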
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.568 09:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:28.568 09:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:13:28.568 09:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:13:28.827 09:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:13:28.827 09:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:28.827 09:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:28.827 09:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:13:28.827 09:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:28.827 09:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:28.827 09:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:28.827 09:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.827 09:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:28.827 09:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.828 09:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:28.828 09:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:28.828 09:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:29.086 00:13:29.086 09:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:29.086 09:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:29.086 09:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:29.344 09:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:29.344 09:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:29.344 09:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.344 09:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:29.344 09:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.344 09:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:29.344 { 00:13:29.344 "cntlid": 101, 00:13:29.344 "qid": 0, 00:13:29.344 "state": "enabled", 00:13:29.344 "thread": "nvmf_tgt_poll_group_000", 00:13:29.344 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c", 00:13:29.344 "listen_address": { 00:13:29.344 "trtype": "TCP", 00:13:29.344 "adrfam": "IPv4", 00:13:29.344 "traddr": "10.0.0.3", 00:13:29.344 "trsvcid": "4420" 00:13:29.344 }, 00:13:29.344 "peer_address": { 00:13:29.344 "trtype": "TCP", 00:13:29.344 "adrfam": "IPv4", 00:13:29.344 "traddr": "10.0.0.1", 00:13:29.344 "trsvcid": "46412" 00:13:29.344 }, 00:13:29.344 "auth": { 00:13:29.344 "state": "completed", 00:13:29.344 "digest": "sha512", 00:13:29.344 "dhgroup": "null" 00:13:29.344 } 00:13:29.344 } 00:13:29.344 ]' 00:13:29.344 09:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:29.612 09:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:29.612 09:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:29.612 09:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:13:29.612 09:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:29.612 09:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:29.612 09:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:29.612 09:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:29.871 09:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTI1NzhmNDY1OTc4MTIxMDVkMjg0OTQ0YzhkMzdiNzRmYjA5NWQyYmYwOWY0NDE3KVMkCw==: --dhchap-ctrl-secret DHHC-1:01:ZGEyODg5MDM3MzUxN2Q2NzQwZTY5ZTJmYjc4NTk0ZjTW3ane: 00:13:29.871 09:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c --hostid 9203ba0c-8506-4f0b-a886-a7f874c4694c -l 0 --dhchap-secret DHHC-1:02:NTI1NzhmNDY1OTc4MTIxMDVkMjg0OTQ0YzhkMzdiNzRmYjA5NWQyYmYwOWY0NDE3KVMkCw==: --dhchap-ctrl-secret DHHC-1:01:ZGEyODg5MDM3MzUxN2Q2NzQwZTY5ZTJmYjc4NTk0ZjTW3ane: 00:13:30.436 09:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:30.436 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:30.436 09:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c 00:13:30.436 09:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.436 09:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x 00:13:30.694 09:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.694 09:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:30.694 09:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:13:30.694 09:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:13:30.952 09:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:13:30.952 09:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:30.952 09:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:30.952 09:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:13:30.952 09:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:30.952 09:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:30.952 09:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c --dhchap-key key3 00:13:30.953 09:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.953 09:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:30.953 09:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.953 09:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:30.953 09:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:30.953 09:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:31.211 00:13:31.211 09:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:31.211 09:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:31.211 09:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:31.469 09:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:31.469 09:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:31.469 09:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:13:31.469 09:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:31.469 09:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.469 09:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:31.469 { 00:13:31.469 "cntlid": 103, 00:13:31.469 "qid": 0, 00:13:31.469 "state": "enabled", 00:13:31.469 "thread": "nvmf_tgt_poll_group_000", 00:13:31.469 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c", 00:13:31.469 "listen_address": { 00:13:31.469 "trtype": "TCP", 00:13:31.469 "adrfam": "IPv4", 00:13:31.469 "traddr": "10.0.0.3", 00:13:31.469 "trsvcid": "4420" 00:13:31.469 }, 00:13:31.469 "peer_address": { 00:13:31.469 "trtype": "TCP", 00:13:31.469 "adrfam": "IPv4", 00:13:31.469 "traddr": "10.0.0.1", 00:13:31.469 "trsvcid": "46428" 00:13:31.469 }, 00:13:31.469 "auth": { 00:13:31.469 "state": "completed", 00:13:31.469 "digest": "sha512", 00:13:31.469 "dhgroup": "null" 00:13:31.469 } 00:13:31.469 } 00:13:31.469 ]' 00:13:31.469 09:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:31.469 09:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:31.469 09:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:31.728 09:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:13:31.728 09:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:31.728 09:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:31.728 09:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:31.728 09:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:31.986 09:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MzhlYzIxNzUwOThmN2UzOTM4ZTBkYTA1MjIwMzZiZmMwNmZiMDFiODk3MzkyZjUxOTFkZDY3OTc1M2U2NjA0NAyL5SM=: 00:13:31.986 09:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c --hostid 9203ba0c-8506-4f0b-a886-a7f874c4694c -l 0 --dhchap-secret DHHC-1:03:MzhlYzIxNzUwOThmN2UzOTM4ZTBkYTA1MjIwMzZiZmMwNmZiMDFiODk3MzkyZjUxOTFkZDY3OTc1M2U2NjA0NAyL5SM=: 00:13:32.922 09:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:32.922 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:32.922 09:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c 00:13:32.922 09:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.922 09:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:32.922 09:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:13:32.922 09:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:32.922 09:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:32.922 09:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:32.922 09:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:32.922 09:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:13:32.922 09:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:32.922 09:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:32.922 09:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:13:32.922 09:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:32.922 09:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:32.922 09:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:32.922 09:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.922 09:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:32.922 09:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.922 09:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:32.922 09:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:32.922 09:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:33.533 00:13:33.533 09:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:33.533 09:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:33.533 09:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:33.533 09:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:33.533 09:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:33.533 
09:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.533 09:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:33.793 09:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.793 09:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:33.793 { 00:13:33.793 "cntlid": 105, 00:13:33.793 "qid": 0, 00:13:33.793 "state": "enabled", 00:13:33.793 "thread": "nvmf_tgt_poll_group_000", 00:13:33.793 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c", 00:13:33.793 "listen_address": { 00:13:33.793 "trtype": "TCP", 00:13:33.793 "adrfam": "IPv4", 00:13:33.793 "traddr": "10.0.0.3", 00:13:33.793 "trsvcid": "4420" 00:13:33.793 }, 00:13:33.793 "peer_address": { 00:13:33.793 "trtype": "TCP", 00:13:33.793 "adrfam": "IPv4", 00:13:33.793 "traddr": "10.0.0.1", 00:13:33.793 "trsvcid": "46458" 00:13:33.793 }, 00:13:33.793 "auth": { 00:13:33.793 "state": "completed", 00:13:33.793 "digest": "sha512", 00:13:33.793 "dhgroup": "ffdhe2048" 00:13:33.793 } 00:13:33.793 } 00:13:33.793 ]' 00:13:33.793 09:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:33.793 09:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:33.793 09:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:33.793 09:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:33.793 09:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:33.793 09:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:33.793 09:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:33.793 09:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:34.051 09:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTFkMDQyZTZhNmViZjJlMmYwZDgwZmRjZTgzM2RlZWI0MTUzZmI4OTFkYzllYTg2xOE6TQ==: --dhchap-ctrl-secret DHHC-1:03:MmU4OGI1ZTM5OTBkYzc1ODg4NmIxODY3MGQ5MWM3OTQwOTU4ZTM4NGU4Y2YzMDFiZDFlZmE0MzE0NmU3OTQ3NqAJprQ=: 00:13:34.051 09:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c --hostid 9203ba0c-8506-4f0b-a886-a7f874c4694c -l 0 --dhchap-secret DHHC-1:00:ZTFkMDQyZTZhNmViZjJlMmYwZDgwZmRjZTgzM2RlZWI0MTUzZmI4OTFkYzllYTg2xOE6TQ==: --dhchap-ctrl-secret DHHC-1:03:MmU4OGI1ZTM5OTBkYzc1ODg4NmIxODY3MGQ5MWM3OTQwOTU4ZTM4NGU4Y2YzMDFiZDFlZmE0MzE0NmU3OTQ3NqAJprQ=: 00:13:34.985 09:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:34.985 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:34.985 09:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c 00:13:34.985 09:41:22 
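[editorial note] After the SPDK-host leg, each iteration repeats the handshake from the Linux kernel initiator via nvme-cli, passing the secrets in their DHHC-1 wire format. A sketch of that leg and its teardown; the DHHC-1 strings below are illustrative placeholders, not the keys used in this run:

  # Sketch: kernel-initiator connect with DH-HMAC-CHAP secrets (placeholder secrets).
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c
  nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -l 0 \
      -q "$hostnqn" --hostid 9203ba0c-8506-4f0b-a886-a7f874c4694c \
      --dhchap-secret      "DHHC-1:00:<host-secret-placeholder>:" \
      --dhchap-ctrl-secret "DHHC-1:03:<ctrl-secret-placeholder>:"

  # Tear the session back down and de-authorize the host so the next iteration starts clean.
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_host \
      nqn.2024-03.io.spdk:cnode0 "$hostnqn"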
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.985 09:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:34.985 09:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.985 09:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:34.985 09:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:34.985 09:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:34.985 09:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:13:34.985 09:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:34.985 09:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:34.985 09:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:13:34.985 09:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:34.985 09:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:34.985 09:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:34.985 09:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.985 09:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:34.985 09:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.985 09:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:34.985 09:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:34.985 09:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:35.551 00:13:35.551 09:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:35.551 09:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:35.552 09:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:35.810 09:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
[[ nvme0 == \n\v\m\e\0 ]] 00:13:35.810 09:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:35.810 09:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.810 09:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:35.810 09:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.810 09:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:35.810 { 00:13:35.810 "cntlid": 107, 00:13:35.810 "qid": 0, 00:13:35.810 "state": "enabled", 00:13:35.810 "thread": "nvmf_tgt_poll_group_000", 00:13:35.810 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c", 00:13:35.810 "listen_address": { 00:13:35.810 "trtype": "TCP", 00:13:35.810 "adrfam": "IPv4", 00:13:35.810 "traddr": "10.0.0.3", 00:13:35.810 "trsvcid": "4420" 00:13:35.810 }, 00:13:35.810 "peer_address": { 00:13:35.810 "trtype": "TCP", 00:13:35.810 "adrfam": "IPv4", 00:13:35.810 "traddr": "10.0.0.1", 00:13:35.810 "trsvcid": "46494" 00:13:35.810 }, 00:13:35.810 "auth": { 00:13:35.810 "state": "completed", 00:13:35.810 "digest": "sha512", 00:13:35.810 "dhgroup": "ffdhe2048" 00:13:35.810 } 00:13:35.810 } 00:13:35.810 ]' 00:13:35.810 09:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:35.810 09:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:35.810 09:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:35.810 09:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:35.810 09:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:35.810 09:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:35.810 09:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:35.810 09:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:36.377 09:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTM1OTcwNTEwMDc2NGI3MjIzNzk5ZWJkM2I3OGRjMzGbGEP/: --dhchap-ctrl-secret DHHC-1:02:Y2Y0ZjA0MTgzZjFiYWY3NjlhZThiMDNhNDFkYjk3NTM5ZjM0YTM2ZTMyOGQzMDU2TyqA9w==: 00:13:36.377 09:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c --hostid 9203ba0c-8506-4f0b-a886-a7f874c4694c -l 0 --dhchap-secret DHHC-1:01:MTM1OTcwNTEwMDc2NGI3MjIzNzk5ZWJkM2I3OGRjMzGbGEP/: --dhchap-ctrl-secret DHHC-1:02:Y2Y0ZjA0MTgzZjFiYWY3NjlhZThiMDNhNDFkYjk3NTM5ZjM0YTM2ZTMyOGQzMDU2TyqA9w==: 00:13:36.943 09:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:36.943 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:36.943 09:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c 00:13:36.943 09:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.943 09:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:36.943 09:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.943 09:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:36.943 09:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:36.943 09:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:37.201 09:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:13:37.201 09:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:37.201 09:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:37.201 09:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:13:37.201 09:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:37.201 09:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:37.201 09:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:37.201 09:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.201 09:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:37.201 09:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.201 09:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:37.201 09:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:37.201 09:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:37.496 00:13:37.496 09:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:37.496 09:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:37.496 09:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:13:38.063 09:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:38.063 09:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:38.063 09:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.063 09:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:38.063 09:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.063 09:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:38.063 { 00:13:38.063 "cntlid": 109, 00:13:38.063 "qid": 0, 00:13:38.063 "state": "enabled", 00:13:38.063 "thread": "nvmf_tgt_poll_group_000", 00:13:38.063 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c", 00:13:38.063 "listen_address": { 00:13:38.063 "trtype": "TCP", 00:13:38.063 "adrfam": "IPv4", 00:13:38.063 "traddr": "10.0.0.3", 00:13:38.063 "trsvcid": "4420" 00:13:38.063 }, 00:13:38.063 "peer_address": { 00:13:38.063 "trtype": "TCP", 00:13:38.063 "adrfam": "IPv4", 00:13:38.063 "traddr": "10.0.0.1", 00:13:38.063 "trsvcid": "60406" 00:13:38.063 }, 00:13:38.063 "auth": { 00:13:38.063 "state": "completed", 00:13:38.063 "digest": "sha512", 00:13:38.063 "dhgroup": "ffdhe2048" 00:13:38.063 } 00:13:38.063 } 00:13:38.063 ]' 00:13:38.063 09:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:38.063 09:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:38.063 09:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:38.063 09:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:38.063 09:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:38.063 09:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:38.063 09:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:38.063 09:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:38.321 09:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTI1NzhmNDY1OTc4MTIxMDVkMjg0OTQ0YzhkMzdiNzRmYjA5NWQyYmYwOWY0NDE3KVMkCw==: --dhchap-ctrl-secret DHHC-1:01:ZGEyODg5MDM3MzUxN2Q2NzQwZTY5ZTJmYjc4NTk0ZjTW3ane: 00:13:38.321 09:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c --hostid 9203ba0c-8506-4f0b-a886-a7f874c4694c -l 0 --dhchap-secret DHHC-1:02:NTI1NzhmNDY1OTc4MTIxMDVkMjg0OTQ0YzhkMzdiNzRmYjA5NWQyYmYwOWY0NDE3KVMkCw==: --dhchap-ctrl-secret DHHC-1:01:ZGEyODg5MDM3MzUxN2Q2NzQwZTY5ZTJmYjc4NTk0ZjTW3ane: 00:13:39.256 09:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:39.256 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:39.256 09:41:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c 00:13:39.256 09:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.256 09:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:39.256 09:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.256 09:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:39.256 09:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:39.256 09:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:39.515 09:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:13:39.515 09:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:39.515 09:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:39.515 09:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:13:39.515 09:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:39.515 09:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:39.515 09:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c --dhchap-key key3 00:13:39.515 09:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.515 09:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:39.515 09:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.515 09:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:39.515 09:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:39.515 09:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:39.773 00:13:39.773 09:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:39.774 09:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:39.774 09:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # jq -r '.[].name' 00:13:40.032 09:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:40.032 09:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:40.032 09:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.032 09:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:40.032 09:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.032 09:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:40.032 { 00:13:40.032 "cntlid": 111, 00:13:40.032 "qid": 0, 00:13:40.032 "state": "enabled", 00:13:40.032 "thread": "nvmf_tgt_poll_group_000", 00:13:40.032 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c", 00:13:40.032 "listen_address": { 00:13:40.032 "trtype": "TCP", 00:13:40.032 "adrfam": "IPv4", 00:13:40.032 "traddr": "10.0.0.3", 00:13:40.032 "trsvcid": "4420" 00:13:40.032 }, 00:13:40.032 "peer_address": { 00:13:40.032 "trtype": "TCP", 00:13:40.032 "adrfam": "IPv4", 00:13:40.032 "traddr": "10.0.0.1", 00:13:40.032 "trsvcid": "60436" 00:13:40.032 }, 00:13:40.032 "auth": { 00:13:40.032 "state": "completed", 00:13:40.032 "digest": "sha512", 00:13:40.032 "dhgroup": "ffdhe2048" 00:13:40.032 } 00:13:40.032 } 00:13:40.032 ]' 00:13:40.032 09:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:40.032 09:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:40.032 09:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:40.032 09:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:40.032 09:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:40.290 09:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:40.290 09:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:40.290 09:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:40.549 09:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MzhlYzIxNzUwOThmN2UzOTM4ZTBkYTA1MjIwMzZiZmMwNmZiMDFiODk3MzkyZjUxOTFkZDY3OTc1M2U2NjA0NAyL5SM=: 00:13:40.549 09:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c --hostid 9203ba0c-8506-4f0b-a886-a7f874c4694c -l 0 --dhchap-secret DHHC-1:03:MzhlYzIxNzUwOThmN2UzOTM4ZTBkYTA1MjIwMzZiZmMwNmZiMDFiODk3MzkyZjUxOTFkZDY3OTc1M2U2NjA0NAyL5SM=: 00:13:41.116 09:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:41.116 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:41.116 09:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c 00:13:41.116 09:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.116 09:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:41.116 09:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.116 09:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:41.116 09:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:41.116 09:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:41.116 09:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:41.682 09:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:13:41.682 09:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:41.682 09:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:41.682 09:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:41.682 09:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:41.682 09:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:41.682 09:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:41.682 09:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.682 09:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:41.682 09:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.682 09:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:41.682 09:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:41.682 09:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:41.940 00:13:41.940 09:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:41.940 09:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:13:41.940 09:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:42.199 09:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:42.199 09:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:42.199 09:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.199 09:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:42.199 09:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.199 09:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:42.199 { 00:13:42.199 "cntlid": 113, 00:13:42.199 "qid": 0, 00:13:42.199 "state": "enabled", 00:13:42.199 "thread": "nvmf_tgt_poll_group_000", 00:13:42.199 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c", 00:13:42.199 "listen_address": { 00:13:42.199 "trtype": "TCP", 00:13:42.199 "adrfam": "IPv4", 00:13:42.199 "traddr": "10.0.0.3", 00:13:42.199 "trsvcid": "4420" 00:13:42.199 }, 00:13:42.199 "peer_address": { 00:13:42.199 "trtype": "TCP", 00:13:42.199 "adrfam": "IPv4", 00:13:42.199 "traddr": "10.0.0.1", 00:13:42.199 "trsvcid": "60470" 00:13:42.199 }, 00:13:42.199 "auth": { 00:13:42.199 "state": "completed", 00:13:42.199 "digest": "sha512", 00:13:42.199 "dhgroup": "ffdhe3072" 00:13:42.199 } 00:13:42.199 } 00:13:42.199 ]' 00:13:42.199 09:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:42.199 09:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:42.199 09:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:42.531 09:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:42.531 09:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:42.531 09:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:42.531 09:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:42.531 09:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:42.789 09:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTFkMDQyZTZhNmViZjJlMmYwZDgwZmRjZTgzM2RlZWI0MTUzZmI4OTFkYzllYTg2xOE6TQ==: --dhchap-ctrl-secret DHHC-1:03:MmU4OGI1ZTM5OTBkYzc1ODg4NmIxODY3MGQ5MWM3OTQwOTU4ZTM4NGU4Y2YzMDFiZDFlZmE0MzE0NmU3OTQ3NqAJprQ=: 00:13:42.789 09:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c --hostid 9203ba0c-8506-4f0b-a886-a7f874c4694c -l 0 --dhchap-secret DHHC-1:00:ZTFkMDQyZTZhNmViZjJlMmYwZDgwZmRjZTgzM2RlZWI0MTUzZmI4OTFkYzllYTg2xOE6TQ==: --dhchap-ctrl-secret DHHC-1:03:MmU4OGI1ZTM5OTBkYzc1ODg4NmIxODY3MGQ5MWM3OTQwOTU4ZTM4NGU4Y2YzMDFiZDFlZmE0MzE0NmU3OTQ3NqAJprQ=: 00:13:43.355 
09:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:43.355 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:43.355 09:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c 00:13:43.355 09:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.355 09:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:43.355 09:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.355 09:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:43.355 09:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:43.355 09:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:43.613 09:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:13:43.613 09:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:43.613 09:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:43.613 09:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:43.613 09:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:43.613 09:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:43.613 09:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:43.613 09:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.613 09:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:43.613 09:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.613 09:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:43.613 09:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:43.613 09:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:43.871 00:13:43.871 09:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:43.871 09:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:43.871 09:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:44.129 09:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:44.129 09:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:44.129 09:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.129 09:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:44.386 09:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.386 09:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:44.386 { 00:13:44.386 "cntlid": 115, 00:13:44.386 "qid": 0, 00:13:44.386 "state": "enabled", 00:13:44.386 "thread": "nvmf_tgt_poll_group_000", 00:13:44.386 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c", 00:13:44.386 "listen_address": { 00:13:44.386 "trtype": "TCP", 00:13:44.386 "adrfam": "IPv4", 00:13:44.386 "traddr": "10.0.0.3", 00:13:44.386 "trsvcid": "4420" 00:13:44.386 }, 00:13:44.386 "peer_address": { 00:13:44.386 "trtype": "TCP", 00:13:44.386 "adrfam": "IPv4", 00:13:44.386 "traddr": "10.0.0.1", 00:13:44.386 "trsvcid": "60494" 00:13:44.386 }, 00:13:44.386 "auth": { 00:13:44.386 "state": "completed", 00:13:44.386 "digest": "sha512", 00:13:44.386 "dhgroup": "ffdhe3072" 00:13:44.386 } 00:13:44.386 } 00:13:44.386 ]' 00:13:44.386 09:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:44.386 09:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:44.386 09:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:44.386 09:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:44.386 09:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:44.386 09:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:44.386 09:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:44.386 09:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:44.644 09:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTM1OTcwNTEwMDc2NGI3MjIzNzk5ZWJkM2I3OGRjMzGbGEP/: --dhchap-ctrl-secret DHHC-1:02:Y2Y0ZjA0MTgzZjFiYWY3NjlhZThiMDNhNDFkYjk3NTM5ZjM0YTM2ZTMyOGQzMDU2TyqA9w==: 00:13:44.644 09:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c --hostid 9203ba0c-8506-4f0b-a886-a7f874c4694c -l 0 --dhchap-secret DHHC-1:01:MTM1OTcwNTEwMDc2NGI3MjIzNzk5ZWJkM2I3OGRjMzGbGEP/: 
--dhchap-ctrl-secret DHHC-1:02:Y2Y0ZjA0MTgzZjFiYWY3NjlhZThiMDNhNDFkYjk3NTM5ZjM0YTM2ZTMyOGQzMDU2TyqA9w==: 00:13:45.578 09:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:45.578 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:45.578 09:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c 00:13:45.579 09:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.579 09:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:45.579 09:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.579 09:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:45.579 09:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:45.579 09:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:45.836 09:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:13:45.836 09:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:45.836 09:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:45.836 09:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:45.836 09:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:45.836 09:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:45.836 09:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:45.836 09:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.836 09:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:45.836 09:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.836 09:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:45.836 09:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:45.837 09:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:46.094 00:13:46.094 09:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:46.094 09:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:46.094 09:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:46.352 09:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:46.352 09:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:46.352 09:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.352 09:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:46.352 09:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.352 09:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:46.352 { 00:13:46.352 "cntlid": 117, 00:13:46.352 "qid": 0, 00:13:46.352 "state": "enabled", 00:13:46.352 "thread": "nvmf_tgt_poll_group_000", 00:13:46.352 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c", 00:13:46.352 "listen_address": { 00:13:46.352 "trtype": "TCP", 00:13:46.352 "adrfam": "IPv4", 00:13:46.352 "traddr": "10.0.0.3", 00:13:46.352 "trsvcid": "4420" 00:13:46.352 }, 00:13:46.352 "peer_address": { 00:13:46.352 "trtype": "TCP", 00:13:46.352 "adrfam": "IPv4", 00:13:46.352 "traddr": "10.0.0.1", 00:13:46.352 "trsvcid": "60550" 00:13:46.352 }, 00:13:46.352 "auth": { 00:13:46.352 "state": "completed", 00:13:46.352 "digest": "sha512", 00:13:46.352 "dhgroup": "ffdhe3072" 00:13:46.352 } 00:13:46.352 } 00:13:46.352 ]' 00:13:46.352 09:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:46.352 09:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:46.352 09:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:46.610 09:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:46.611 09:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:46.611 09:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:46.611 09:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:46.611 09:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:46.869 09:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTI1NzhmNDY1OTc4MTIxMDVkMjg0OTQ0YzhkMzdiNzRmYjA5NWQyYmYwOWY0NDE3KVMkCw==: --dhchap-ctrl-secret DHHC-1:01:ZGEyODg5MDM3MzUxN2Q2NzQwZTY5ZTJmYjc4NTk0ZjTW3ane: 00:13:46.869 09:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c --hostid 
9203ba0c-8506-4f0b-a886-a7f874c4694c -l 0 --dhchap-secret DHHC-1:02:NTI1NzhmNDY1OTc4MTIxMDVkMjg0OTQ0YzhkMzdiNzRmYjA5NWQyYmYwOWY0NDE3KVMkCw==: --dhchap-ctrl-secret DHHC-1:01:ZGEyODg5MDM3MzUxN2Q2NzQwZTY5ZTJmYjc4NTk0ZjTW3ane: 00:13:47.436 09:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:47.436 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:47.436 09:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c 00:13:47.436 09:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.436 09:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:47.436 09:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.436 09:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:47.436 09:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:47.436 09:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:47.694 09:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:13:47.694 09:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:47.694 09:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:47.694 09:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:47.694 09:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:47.694 09:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:47.694 09:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c --dhchap-key key3 00:13:47.694 09:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.694 09:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:47.694 09:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.694 09:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:47.694 09:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:47.694 09:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:48.261 00:13:48.261 09:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:48.261 09:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:48.261 09:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:48.519 09:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:48.519 09:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:48.519 09:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.519 09:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:48.519 09:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.519 09:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:48.519 { 00:13:48.519 "cntlid": 119, 00:13:48.519 "qid": 0, 00:13:48.519 "state": "enabled", 00:13:48.519 "thread": "nvmf_tgt_poll_group_000", 00:13:48.519 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c", 00:13:48.519 "listen_address": { 00:13:48.519 "trtype": "TCP", 00:13:48.519 "adrfam": "IPv4", 00:13:48.519 "traddr": "10.0.0.3", 00:13:48.520 "trsvcid": "4420" 00:13:48.520 }, 00:13:48.520 "peer_address": { 00:13:48.520 "trtype": "TCP", 00:13:48.520 "adrfam": "IPv4", 00:13:48.520 "traddr": "10.0.0.1", 00:13:48.520 "trsvcid": "60566" 00:13:48.520 }, 00:13:48.520 "auth": { 00:13:48.520 "state": "completed", 00:13:48.520 "digest": "sha512", 00:13:48.520 "dhgroup": "ffdhe3072" 00:13:48.520 } 00:13:48.520 } 00:13:48.520 ]' 00:13:48.520 09:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:48.520 09:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:48.520 09:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:48.520 09:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:48.520 09:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:48.520 09:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:48.520 09:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:48.520 09:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:48.778 09:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MzhlYzIxNzUwOThmN2UzOTM4ZTBkYTA1MjIwMzZiZmMwNmZiMDFiODk3MzkyZjUxOTFkZDY3OTc1M2U2NjA0NAyL5SM=: 00:13:48.778 09:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c --hostid 9203ba0c-8506-4f0b-a886-a7f874c4694c -l 0 --dhchap-secret 
DHHC-1:03:MzhlYzIxNzUwOThmN2UzOTM4ZTBkYTA1MjIwMzZiZmMwNmZiMDFiODk3MzkyZjUxOTFkZDY3OTc1M2U2NjA0NAyL5SM=: 00:13:49.804 09:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:49.805 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:49.805 09:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c 00:13:49.805 09:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.805 09:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:49.805 09:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.805 09:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:49.805 09:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:49.805 09:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:49.805 09:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:49.805 09:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:13:49.805 09:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:49.805 09:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:49.805 09:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:49.805 09:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:49.805 09:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:49.805 09:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:49.805 09:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.805 09:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:49.805 09:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.805 09:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:49.805 09:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:49.805 09:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 
4420 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:50.370 00:13:50.370 09:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:50.370 09:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:50.370 09:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:50.630 09:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:50.630 09:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:50.630 09:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.630 09:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:50.630 09:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.630 09:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:50.630 { 00:13:50.630 "cntlid": 121, 00:13:50.630 "qid": 0, 00:13:50.630 "state": "enabled", 00:13:50.630 "thread": "nvmf_tgt_poll_group_000", 00:13:50.630 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c", 00:13:50.630 "listen_address": { 00:13:50.630 "trtype": "TCP", 00:13:50.630 "adrfam": "IPv4", 00:13:50.630 "traddr": "10.0.0.3", 00:13:50.630 "trsvcid": "4420" 00:13:50.630 }, 00:13:50.630 "peer_address": { 00:13:50.630 "trtype": "TCP", 00:13:50.630 "adrfam": "IPv4", 00:13:50.630 "traddr": "10.0.0.1", 00:13:50.630 "trsvcid": "60590" 00:13:50.630 }, 00:13:50.630 "auth": { 00:13:50.630 "state": "completed", 00:13:50.630 "digest": "sha512", 00:13:50.630 "dhgroup": "ffdhe4096" 00:13:50.630 } 00:13:50.630 } 00:13:50.630 ]' 00:13:50.630 09:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:50.630 09:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:50.630 09:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:50.888 09:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:50.888 09:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:50.888 09:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:50.888 09:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:50.888 09:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:51.147 09:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTFkMDQyZTZhNmViZjJlMmYwZDgwZmRjZTgzM2RlZWI0MTUzZmI4OTFkYzllYTg2xOE6TQ==: --dhchap-ctrl-secret DHHC-1:03:MmU4OGI1ZTM5OTBkYzc1ODg4NmIxODY3MGQ5MWM3OTQwOTU4ZTM4NGU4Y2YzMDFiZDFlZmE0MzE0NmU3OTQ3NqAJprQ=: 00:13:51.147 09:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 
-- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c --hostid 9203ba0c-8506-4f0b-a886-a7f874c4694c -l 0 --dhchap-secret DHHC-1:00:ZTFkMDQyZTZhNmViZjJlMmYwZDgwZmRjZTgzM2RlZWI0MTUzZmI4OTFkYzllYTg2xOE6TQ==: --dhchap-ctrl-secret DHHC-1:03:MmU4OGI1ZTM5OTBkYzc1ODg4NmIxODY3MGQ5MWM3OTQwOTU4ZTM4NGU4Y2YzMDFiZDFlZmE0MzE0NmU3OTQ3NqAJprQ=: 00:13:51.713 09:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:51.713 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:51.713 09:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c 00:13:51.713 09:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.713 09:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:51.713 09:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.713 09:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:51.713 09:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:51.713 09:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:51.971 09:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:13:51.971 09:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:51.971 09:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:51.971 09:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:51.971 09:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:51.971 09:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:51.971 09:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:51.971 09:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.971 09:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:51.971 09:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.971 09:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:51.971 09:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:51.971 09:41:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:52.537 00:13:52.537 09:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:52.537 09:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:52.537 09:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:52.794 09:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:52.794 09:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:52.794 09:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.794 09:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:52.794 09:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.794 09:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:52.794 { 00:13:52.794 "cntlid": 123, 00:13:52.794 "qid": 0, 00:13:52.794 "state": "enabled", 00:13:52.794 "thread": "nvmf_tgt_poll_group_000", 00:13:52.794 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c", 00:13:52.794 "listen_address": { 00:13:52.794 "trtype": "TCP", 00:13:52.794 "adrfam": "IPv4", 00:13:52.794 "traddr": "10.0.0.3", 00:13:52.794 "trsvcid": "4420" 00:13:52.794 }, 00:13:52.794 "peer_address": { 00:13:52.794 "trtype": "TCP", 00:13:52.794 "adrfam": "IPv4", 00:13:52.794 "traddr": "10.0.0.1", 00:13:52.794 "trsvcid": "60598" 00:13:52.794 }, 00:13:52.794 "auth": { 00:13:52.794 "state": "completed", 00:13:52.794 "digest": "sha512", 00:13:52.794 "dhgroup": "ffdhe4096" 00:13:52.794 } 00:13:52.794 } 00:13:52.794 ]' 00:13:52.794 09:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:52.794 09:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:52.795 09:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:52.795 09:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:52.795 09:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:53.053 09:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:53.053 09:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:53.053 09:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:53.312 09:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTM1OTcwNTEwMDc2NGI3MjIzNzk5ZWJkM2I3OGRjMzGbGEP/: --dhchap-ctrl-secret 
DHHC-1:02:Y2Y0ZjA0MTgzZjFiYWY3NjlhZThiMDNhNDFkYjk3NTM5ZjM0YTM2ZTMyOGQzMDU2TyqA9w==: 00:13:53.312 09:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c --hostid 9203ba0c-8506-4f0b-a886-a7f874c4694c -l 0 --dhchap-secret DHHC-1:01:MTM1OTcwNTEwMDc2NGI3MjIzNzk5ZWJkM2I3OGRjMzGbGEP/: --dhchap-ctrl-secret DHHC-1:02:Y2Y0ZjA0MTgzZjFiYWY3NjlhZThiMDNhNDFkYjk3NTM5ZjM0YTM2ZTMyOGQzMDU2TyqA9w==: 00:13:53.879 09:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:53.879 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:53.879 09:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c 00:13:53.879 09:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.879 09:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:53.879 09:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.879 09:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:53.879 09:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:53.879 09:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:54.138 09:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:13:54.138 09:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:54.138 09:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:54.138 09:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:54.138 09:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:54.138 09:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:54.138 09:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:54.138 09:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.138 09:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:54.138 09:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.138 09:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:54.138 09:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:54.138 09:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:54.397 00:13:54.655 09:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:54.655 09:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:54.655 09:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:54.913 09:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:54.913 09:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:54.913 09:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.913 09:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:54.913 09:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.913 09:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:54.913 { 00:13:54.913 "cntlid": 125, 00:13:54.913 "qid": 0, 00:13:54.913 "state": "enabled", 00:13:54.913 "thread": "nvmf_tgt_poll_group_000", 00:13:54.913 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c", 00:13:54.913 "listen_address": { 00:13:54.913 "trtype": "TCP", 00:13:54.913 "adrfam": "IPv4", 00:13:54.913 "traddr": "10.0.0.3", 00:13:54.913 "trsvcid": "4420" 00:13:54.913 }, 00:13:54.913 "peer_address": { 00:13:54.913 "trtype": "TCP", 00:13:54.913 "adrfam": "IPv4", 00:13:54.913 "traddr": "10.0.0.1", 00:13:54.913 "trsvcid": "60636" 00:13:54.913 }, 00:13:54.913 "auth": { 00:13:54.913 "state": "completed", 00:13:54.913 "digest": "sha512", 00:13:54.913 "dhgroup": "ffdhe4096" 00:13:54.913 } 00:13:54.913 } 00:13:54.913 ]' 00:13:54.913 09:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:54.913 09:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:54.913 09:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:54.913 09:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:54.913 09:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:54.913 09:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:54.913 09:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:54.913 09:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:55.478 09:41:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTI1NzhmNDY1OTc4MTIxMDVkMjg0OTQ0YzhkMzdiNzRmYjA5NWQyYmYwOWY0NDE3KVMkCw==: --dhchap-ctrl-secret DHHC-1:01:ZGEyODg5MDM3MzUxN2Q2NzQwZTY5ZTJmYjc4NTk0ZjTW3ane: 00:13:55.478 09:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c --hostid 9203ba0c-8506-4f0b-a886-a7f874c4694c -l 0 --dhchap-secret DHHC-1:02:NTI1NzhmNDY1OTc4MTIxMDVkMjg0OTQ0YzhkMzdiNzRmYjA5NWQyYmYwOWY0NDE3KVMkCw==: --dhchap-ctrl-secret DHHC-1:01:ZGEyODg5MDM3MzUxN2Q2NzQwZTY5ZTJmYjc4NTk0ZjTW3ane: 00:13:56.045 09:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:56.046 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:56.046 09:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c 00:13:56.046 09:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.046 09:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:56.046 09:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.046 09:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:56.046 09:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:56.046 09:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:56.305 09:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:13:56.305 09:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:56.305 09:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:56.305 09:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:56.305 09:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:56.305 09:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:56.305 09:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c --dhchap-key key3 00:13:56.305 09:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.305 09:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:56.305 09:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.305 09:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:56.305 09:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:56.305 09:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:56.872 00:13:56.872 09:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:56.872 09:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:56.872 09:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:57.131 09:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:57.131 09:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:57.131 09:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.131 09:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:57.131 09:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.131 09:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:57.131 { 00:13:57.131 "cntlid": 127, 00:13:57.131 "qid": 0, 00:13:57.131 "state": "enabled", 00:13:57.131 "thread": "nvmf_tgt_poll_group_000", 00:13:57.131 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c", 00:13:57.131 "listen_address": { 00:13:57.131 "trtype": "TCP", 00:13:57.131 "adrfam": "IPv4", 00:13:57.131 "traddr": "10.0.0.3", 00:13:57.131 "trsvcid": "4420" 00:13:57.131 }, 00:13:57.131 "peer_address": { 00:13:57.131 "trtype": "TCP", 00:13:57.131 "adrfam": "IPv4", 00:13:57.131 "traddr": "10.0.0.1", 00:13:57.131 "trsvcid": "50004" 00:13:57.131 }, 00:13:57.131 "auth": { 00:13:57.131 "state": "completed", 00:13:57.131 "digest": "sha512", 00:13:57.131 "dhgroup": "ffdhe4096" 00:13:57.131 } 00:13:57.131 } 00:13:57.131 ]' 00:13:57.131 09:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:57.131 09:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:57.131 09:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:57.132 09:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:57.132 09:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:57.391 09:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:57.391 09:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:57.391 09:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 
00:13:57.650 09:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MzhlYzIxNzUwOThmN2UzOTM4ZTBkYTA1MjIwMzZiZmMwNmZiMDFiODk3MzkyZjUxOTFkZDY3OTc1M2U2NjA0NAyL5SM=: 00:13:57.650 09:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c --hostid 9203ba0c-8506-4f0b-a886-a7f874c4694c -l 0 --dhchap-secret DHHC-1:03:MzhlYzIxNzUwOThmN2UzOTM4ZTBkYTA1MjIwMzZiZmMwNmZiMDFiODk3MzkyZjUxOTFkZDY3OTc1M2U2NjA0NAyL5SM=: 00:13:58.216 09:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:58.216 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:58.216 09:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c 00:13:58.216 09:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.216 09:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:58.216 09:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.216 09:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:58.216 09:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:58.216 09:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:58.216 09:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:58.536 09:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:13:58.536 09:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:58.536 09:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:58.536 09:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:58.536 09:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:58.536 09:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:58.536 09:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:58.536 09:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.536 09:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:58.536 09:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.536 09:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
00:13:58.536 09:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:58.536 09:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:59.103 00:13:59.103 09:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:59.103 09:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:59.103 09:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:59.362 09:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:59.362 09:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:59.362 09:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.362 09:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:59.362 09:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.362 09:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:59.362 { 00:13:59.362 "cntlid": 129, 00:13:59.362 "qid": 0, 00:13:59.362 "state": "enabled", 00:13:59.362 "thread": "nvmf_tgt_poll_group_000", 00:13:59.362 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c", 00:13:59.362 "listen_address": { 00:13:59.362 "trtype": "TCP", 00:13:59.362 "adrfam": "IPv4", 00:13:59.362 "traddr": "10.0.0.3", 00:13:59.362 "trsvcid": "4420" 00:13:59.362 }, 00:13:59.362 "peer_address": { 00:13:59.362 "trtype": "TCP", 00:13:59.362 "adrfam": "IPv4", 00:13:59.362 "traddr": "10.0.0.1", 00:13:59.362 "trsvcid": "50034" 00:13:59.362 }, 00:13:59.362 "auth": { 00:13:59.362 "state": "completed", 00:13:59.362 "digest": "sha512", 00:13:59.362 "dhgroup": "ffdhe6144" 00:13:59.362 } 00:13:59.362 } 00:13:59.362 ]' 00:13:59.362 09:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:59.362 09:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:59.362 09:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:59.362 09:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:59.362 09:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:59.621 09:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:59.621 09:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:59.621 09:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:59.879 09:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTFkMDQyZTZhNmViZjJlMmYwZDgwZmRjZTgzM2RlZWI0MTUzZmI4OTFkYzllYTg2xOE6TQ==: --dhchap-ctrl-secret DHHC-1:03:MmU4OGI1ZTM5OTBkYzc1ODg4NmIxODY3MGQ5MWM3OTQwOTU4ZTM4NGU4Y2YzMDFiZDFlZmE0MzE0NmU3OTQ3NqAJprQ=: 00:13:59.879 09:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c --hostid 9203ba0c-8506-4f0b-a886-a7f874c4694c -l 0 --dhchap-secret DHHC-1:00:ZTFkMDQyZTZhNmViZjJlMmYwZDgwZmRjZTgzM2RlZWI0MTUzZmI4OTFkYzllYTg2xOE6TQ==: --dhchap-ctrl-secret DHHC-1:03:MmU4OGI1ZTM5OTBkYzc1ODg4NmIxODY3MGQ5MWM3OTQwOTU4ZTM4NGU4Y2YzMDFiZDFlZmE0MzE0NmU3OTQ3NqAJprQ=: 00:14:00.451 09:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:00.451 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:00.451 09:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c 00:14:00.451 09:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.451 09:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:00.451 09:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.451 09:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:00.451 09:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:14:00.451 09:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:14:00.709 09:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:14:00.709 09:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:00.709 09:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:00.709 09:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:00.709 09:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:00.709 09:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:00.709 09:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:00.709 09:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.709 09:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:00.709 09:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.709 09:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:00.709 09:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:00.709 09:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:01.275 00:14:01.275 09:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:01.275 09:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:01.275 09:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:01.532 09:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:01.533 09:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:01.533 09:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.533 09:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:01.533 09:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.533 09:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:01.533 { 00:14:01.533 "cntlid": 131, 00:14:01.533 "qid": 0, 00:14:01.533 "state": "enabled", 00:14:01.533 "thread": "nvmf_tgt_poll_group_000", 00:14:01.533 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c", 00:14:01.533 "listen_address": { 00:14:01.533 "trtype": "TCP", 00:14:01.533 "adrfam": "IPv4", 00:14:01.533 "traddr": "10.0.0.3", 00:14:01.533 "trsvcid": "4420" 00:14:01.533 }, 00:14:01.533 "peer_address": { 00:14:01.533 "trtype": "TCP", 00:14:01.533 "adrfam": "IPv4", 00:14:01.533 "traddr": "10.0.0.1", 00:14:01.533 "trsvcid": "50066" 00:14:01.533 }, 00:14:01.533 "auth": { 00:14:01.533 "state": "completed", 00:14:01.533 "digest": "sha512", 00:14:01.533 "dhgroup": "ffdhe6144" 00:14:01.533 } 00:14:01.533 } 00:14:01.533 ]' 00:14:01.533 09:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:01.533 09:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:01.533 09:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:01.790 09:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:01.790 09:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:01.790 09:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 
00:14:01.790 09:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:01.790 09:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:02.048 09:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTM1OTcwNTEwMDc2NGI3MjIzNzk5ZWJkM2I3OGRjMzGbGEP/: --dhchap-ctrl-secret DHHC-1:02:Y2Y0ZjA0MTgzZjFiYWY3NjlhZThiMDNhNDFkYjk3NTM5ZjM0YTM2ZTMyOGQzMDU2TyqA9w==: 00:14:02.048 09:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c --hostid 9203ba0c-8506-4f0b-a886-a7f874c4694c -l 0 --dhchap-secret DHHC-1:01:MTM1OTcwNTEwMDc2NGI3MjIzNzk5ZWJkM2I3OGRjMzGbGEP/: --dhchap-ctrl-secret DHHC-1:02:Y2Y0ZjA0MTgzZjFiYWY3NjlhZThiMDNhNDFkYjk3NTM5ZjM0YTM2ZTMyOGQzMDU2TyqA9w==: 00:14:02.614 09:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:02.614 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:02.614 09:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c 00:14:02.614 09:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.614 09:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:02.614 09:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.614 09:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:02.614 09:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:14:02.614 09:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:14:03.181 09:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:14:03.181 09:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:03.181 09:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:03.181 09:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:03.181 09:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:03.181 09:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:03.181 09:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:03.181 09:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.181 09:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:14:03.181 09:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.181 09:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:03.181 09:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:03.181 09:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:03.440 00:14:03.440 09:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:03.440 09:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:03.440 09:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:04.008 09:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:04.008 09:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:04.008 09:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.008 09:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:04.008 09:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.008 09:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:04.008 { 00:14:04.008 "cntlid": 133, 00:14:04.008 "qid": 0, 00:14:04.008 "state": "enabled", 00:14:04.008 "thread": "nvmf_tgt_poll_group_000", 00:14:04.008 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c", 00:14:04.008 "listen_address": { 00:14:04.008 "trtype": "TCP", 00:14:04.008 "adrfam": "IPv4", 00:14:04.008 "traddr": "10.0.0.3", 00:14:04.008 "trsvcid": "4420" 00:14:04.008 }, 00:14:04.008 "peer_address": { 00:14:04.008 "trtype": "TCP", 00:14:04.008 "adrfam": "IPv4", 00:14:04.008 "traddr": "10.0.0.1", 00:14:04.008 "trsvcid": "50088" 00:14:04.008 }, 00:14:04.008 "auth": { 00:14:04.008 "state": "completed", 00:14:04.008 "digest": "sha512", 00:14:04.008 "dhgroup": "ffdhe6144" 00:14:04.008 } 00:14:04.008 } 00:14:04.008 ]' 00:14:04.008 09:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:04.008 09:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:04.008 09:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:04.008 09:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:04.008 09:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:04.008 09:41:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:04.008 09:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:04.008 09:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:04.267 09:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTI1NzhmNDY1OTc4MTIxMDVkMjg0OTQ0YzhkMzdiNzRmYjA5NWQyYmYwOWY0NDE3KVMkCw==: --dhchap-ctrl-secret DHHC-1:01:ZGEyODg5MDM3MzUxN2Q2NzQwZTY5ZTJmYjc4NTk0ZjTW3ane: 00:14:04.267 09:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c --hostid 9203ba0c-8506-4f0b-a886-a7f874c4694c -l 0 --dhchap-secret DHHC-1:02:NTI1NzhmNDY1OTc4MTIxMDVkMjg0OTQ0YzhkMzdiNzRmYjA5NWQyYmYwOWY0NDE3KVMkCw==: --dhchap-ctrl-secret DHHC-1:01:ZGEyODg5MDM3MzUxN2Q2NzQwZTY5ZTJmYjc4NTk0ZjTW3ane: 00:14:05.202 09:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:05.202 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:05.202 09:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c 00:14:05.202 09:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.202 09:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:05.202 09:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.202 09:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:05.202 09:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:14:05.202 09:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:14:05.461 09:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:14:05.461 09:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:05.461 09:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:05.461 09:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:05.461 09:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:05.461 09:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:05.461 09:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c --dhchap-key key3 00:14:05.461 09:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:14:05.461 09:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:05.461 09:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.461 09:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:05.461 09:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:05.461 09:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:06.028 00:14:06.028 09:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:06.028 09:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:06.028 09:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:06.287 09:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:06.287 09:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:06.287 09:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.288 09:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:06.288 09:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.288 09:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:06.288 { 00:14:06.288 "cntlid": 135, 00:14:06.288 "qid": 0, 00:14:06.288 "state": "enabled", 00:14:06.288 "thread": "nvmf_tgt_poll_group_000", 00:14:06.288 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c", 00:14:06.288 "listen_address": { 00:14:06.288 "trtype": "TCP", 00:14:06.288 "adrfam": "IPv4", 00:14:06.288 "traddr": "10.0.0.3", 00:14:06.288 "trsvcid": "4420" 00:14:06.288 }, 00:14:06.288 "peer_address": { 00:14:06.288 "trtype": "TCP", 00:14:06.288 "adrfam": "IPv4", 00:14:06.288 "traddr": "10.0.0.1", 00:14:06.288 "trsvcid": "50116" 00:14:06.288 }, 00:14:06.288 "auth": { 00:14:06.288 "state": "completed", 00:14:06.288 "digest": "sha512", 00:14:06.288 "dhgroup": "ffdhe6144" 00:14:06.288 } 00:14:06.288 } 00:14:06.288 ]' 00:14:06.288 09:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:06.288 09:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:06.288 09:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:06.288 09:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:06.288 09:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:06.288 
09:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:06.288 09:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:06.288 09:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:06.855 09:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MzhlYzIxNzUwOThmN2UzOTM4ZTBkYTA1MjIwMzZiZmMwNmZiMDFiODk3MzkyZjUxOTFkZDY3OTc1M2U2NjA0NAyL5SM=: 00:14:06.855 09:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c --hostid 9203ba0c-8506-4f0b-a886-a7f874c4694c -l 0 --dhchap-secret DHHC-1:03:MzhlYzIxNzUwOThmN2UzOTM4ZTBkYTA1MjIwMzZiZmMwNmZiMDFiODk3MzkyZjUxOTFkZDY3OTc1M2U2NjA0NAyL5SM=: 00:14:07.422 09:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:07.422 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:07.422 09:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c 00:14:07.422 09:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.422 09:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:07.422 09:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.422 09:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:07.422 09:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:07.422 09:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:07.422 09:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:07.737 09:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:14:07.737 09:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:07.737 09:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:07.737 09:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:07.737 09:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:07.737 09:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:07.737 09:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:07.737 09:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.737 09:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:07.737 09:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.737 09:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:07.737 09:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:07.737 09:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:08.304 00:14:08.304 09:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:08.304 09:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:08.304 09:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:08.562 09:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:08.562 09:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:08.562 09:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.562 09:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:08.562 09:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.562 09:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:08.562 { 00:14:08.562 "cntlid": 137, 00:14:08.562 "qid": 0, 00:14:08.562 "state": "enabled", 00:14:08.562 "thread": "nvmf_tgt_poll_group_000", 00:14:08.562 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c", 00:14:08.562 "listen_address": { 00:14:08.562 "trtype": "TCP", 00:14:08.562 "adrfam": "IPv4", 00:14:08.562 "traddr": "10.0.0.3", 00:14:08.562 "trsvcid": "4420" 00:14:08.562 }, 00:14:08.562 "peer_address": { 00:14:08.562 "trtype": "TCP", 00:14:08.562 "adrfam": "IPv4", 00:14:08.562 "traddr": "10.0.0.1", 00:14:08.562 "trsvcid": "60716" 00:14:08.562 }, 00:14:08.562 "auth": { 00:14:08.562 "state": "completed", 00:14:08.562 "digest": "sha512", 00:14:08.562 "dhgroup": "ffdhe8192" 00:14:08.562 } 00:14:08.562 } 00:14:08.562 ]' 00:14:08.562 09:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:08.822 09:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:08.822 09:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:08.822 09:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:08.822 09:41:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:08.822 09:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:08.822 09:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:08.822 09:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:09.081 09:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTFkMDQyZTZhNmViZjJlMmYwZDgwZmRjZTgzM2RlZWI0MTUzZmI4OTFkYzllYTg2xOE6TQ==: --dhchap-ctrl-secret DHHC-1:03:MmU4OGI1ZTM5OTBkYzc1ODg4NmIxODY3MGQ5MWM3OTQwOTU4ZTM4NGU4Y2YzMDFiZDFlZmE0MzE0NmU3OTQ3NqAJprQ=: 00:14:09.081 09:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c --hostid 9203ba0c-8506-4f0b-a886-a7f874c4694c -l 0 --dhchap-secret DHHC-1:00:ZTFkMDQyZTZhNmViZjJlMmYwZDgwZmRjZTgzM2RlZWI0MTUzZmI4OTFkYzllYTg2xOE6TQ==: --dhchap-ctrl-secret DHHC-1:03:MmU4OGI1ZTM5OTBkYzc1ODg4NmIxODY3MGQ5MWM3OTQwOTU4ZTM4NGU4Y2YzMDFiZDFlZmE0MzE0NmU3OTQ3NqAJprQ=: 00:14:10.016 09:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:10.016 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:10.016 09:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c 00:14:10.016 09:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.016 09:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:10.016 09:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.016 09:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:10.016 09:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:10.016 09:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:10.016 09:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:14:10.016 09:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:10.016 09:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:10.016 09:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:10.016 09:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:10.016 09:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:10.016 09:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:10.016 09:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.016 09:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:10.016 09:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.016 09:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:10.016 09:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:10.016 09:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:10.951 00:14:10.951 09:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:10.951 09:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:10.951 09:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:11.210 09:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:11.210 09:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:11.210 09:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.210 09:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:11.210 09:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.210 09:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:11.210 { 00:14:11.210 "cntlid": 139, 00:14:11.210 "qid": 0, 00:14:11.210 "state": "enabled", 00:14:11.210 "thread": "nvmf_tgt_poll_group_000", 00:14:11.210 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c", 00:14:11.210 "listen_address": { 00:14:11.210 "trtype": "TCP", 00:14:11.210 "adrfam": "IPv4", 00:14:11.210 "traddr": "10.0.0.3", 00:14:11.210 "trsvcid": "4420" 00:14:11.210 }, 00:14:11.210 "peer_address": { 00:14:11.210 "trtype": "TCP", 00:14:11.210 "adrfam": "IPv4", 00:14:11.210 "traddr": "10.0.0.1", 00:14:11.210 "trsvcid": "60744" 00:14:11.210 }, 00:14:11.210 "auth": { 00:14:11.210 "state": "completed", 00:14:11.210 "digest": "sha512", 00:14:11.210 "dhgroup": "ffdhe8192" 00:14:11.210 } 00:14:11.210 } 00:14:11.210 ]' 00:14:11.210 09:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:11.210 09:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:11.210 09:41:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:11.210 09:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:11.210 09:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:11.210 09:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:11.210 09:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:11.210 09:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:11.468 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTM1OTcwNTEwMDc2NGI3MjIzNzk5ZWJkM2I3OGRjMzGbGEP/: --dhchap-ctrl-secret DHHC-1:02:Y2Y0ZjA0MTgzZjFiYWY3NjlhZThiMDNhNDFkYjk3NTM5ZjM0YTM2ZTMyOGQzMDU2TyqA9w==: 00:14:11.468 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c --hostid 9203ba0c-8506-4f0b-a886-a7f874c4694c -l 0 --dhchap-secret DHHC-1:01:MTM1OTcwNTEwMDc2NGI3MjIzNzk5ZWJkM2I3OGRjMzGbGEP/: --dhchap-ctrl-secret DHHC-1:02:Y2Y0ZjA0MTgzZjFiYWY3NjlhZThiMDNhNDFkYjk3NTM5ZjM0YTM2ZTMyOGQzMDU2TyqA9w==: 00:14:12.129 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:12.387 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:12.387 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c 00:14:12.387 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.387 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:12.387 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.387 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:12.387 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:12.387 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:12.646 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:14:12.646 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:12.646 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:12.646 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:12.646 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:12.646 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:12.646 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:12.646 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.646 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:12.646 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.646 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:12.646 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:12.646 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:13.211 00:14:13.211 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:13.211 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:13.211 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:13.470 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:13.470 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:13.470 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.470 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:13.470 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.470 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:13.470 { 00:14:13.470 "cntlid": 141, 00:14:13.470 "qid": 0, 00:14:13.470 "state": "enabled", 00:14:13.470 "thread": "nvmf_tgt_poll_group_000", 00:14:13.470 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c", 00:14:13.470 "listen_address": { 00:14:13.470 "trtype": "TCP", 00:14:13.470 "adrfam": "IPv4", 00:14:13.470 "traddr": "10.0.0.3", 00:14:13.470 "trsvcid": "4420" 00:14:13.470 }, 00:14:13.470 "peer_address": { 00:14:13.470 "trtype": "TCP", 00:14:13.470 "adrfam": "IPv4", 00:14:13.470 "traddr": "10.0.0.1", 00:14:13.470 "trsvcid": "60758" 00:14:13.470 }, 00:14:13.470 "auth": { 00:14:13.470 "state": "completed", 00:14:13.470 "digest": "sha512", 00:14:13.470 "dhgroup": "ffdhe8192" 00:14:13.470 } 00:14:13.470 } 00:14:13.470 ]' 00:14:13.470 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 
00:14:13.470 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:13.470 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:13.470 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:13.470 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:13.728 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:13.728 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:13.728 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:13.986 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTI1NzhmNDY1OTc4MTIxMDVkMjg0OTQ0YzhkMzdiNzRmYjA5NWQyYmYwOWY0NDE3KVMkCw==: --dhchap-ctrl-secret DHHC-1:01:ZGEyODg5MDM3MzUxN2Q2NzQwZTY5ZTJmYjc4NTk0ZjTW3ane: 00:14:13.986 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c --hostid 9203ba0c-8506-4f0b-a886-a7f874c4694c -l 0 --dhchap-secret DHHC-1:02:NTI1NzhmNDY1OTc4MTIxMDVkMjg0OTQ0YzhkMzdiNzRmYjA5NWQyYmYwOWY0NDE3KVMkCw==: --dhchap-ctrl-secret DHHC-1:01:ZGEyODg5MDM3MzUxN2Q2NzQwZTY5ZTJmYjc4NTk0ZjTW3ane: 00:14:14.553 09:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:14.553 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:14.553 09:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c 00:14:14.553 09:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.553 09:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:14.553 09:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.553 09:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:14.553 09:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:14.553 09:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:15.122 09:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:14:15.122 09:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:15.122 09:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:15.122 09:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:15.122 09:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # key=key3 00:14:15.122 09:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:15.122 09:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c --dhchap-key key3 00:14:15.122 09:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.122 09:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:15.122 09:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.122 09:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:15.122 09:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:15.122 09:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:15.696 00:14:15.696 09:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:15.696 09:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:15.696 09:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:15.982 09:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:15.982 09:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:15.982 09:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.982 09:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:15.982 09:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.982 09:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:15.982 { 00:14:15.982 "cntlid": 143, 00:14:15.982 "qid": 0, 00:14:15.982 "state": "enabled", 00:14:15.982 "thread": "nvmf_tgt_poll_group_000", 00:14:15.982 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c", 00:14:15.982 "listen_address": { 00:14:15.982 "trtype": "TCP", 00:14:15.982 "adrfam": "IPv4", 00:14:15.982 "traddr": "10.0.0.3", 00:14:15.982 "trsvcid": "4420" 00:14:15.982 }, 00:14:15.982 "peer_address": { 00:14:15.982 "trtype": "TCP", 00:14:15.982 "adrfam": "IPv4", 00:14:15.982 "traddr": "10.0.0.1", 00:14:15.982 "trsvcid": "60784" 00:14:15.982 }, 00:14:15.982 "auth": { 00:14:15.982 "state": "completed", 00:14:15.982 "digest": "sha512", 00:14:15.982 "dhgroup": "ffdhe8192" 00:14:15.982 } 00:14:15.982 } 00:14:15.982 ]' 00:14:15.982 09:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:14:15.982 09:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:15.982 09:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:15.982 09:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:15.982 09:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:15.982 09:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:15.982 09:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:15.982 09:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:16.551 09:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MzhlYzIxNzUwOThmN2UzOTM4ZTBkYTA1MjIwMzZiZmMwNmZiMDFiODk3MzkyZjUxOTFkZDY3OTc1M2U2NjA0NAyL5SM=: 00:14:16.551 09:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c --hostid 9203ba0c-8506-4f0b-a886-a7f874c4694c -l 0 --dhchap-secret DHHC-1:03:MzhlYzIxNzUwOThmN2UzOTM4ZTBkYTA1MjIwMzZiZmMwNmZiMDFiODk3MzkyZjUxOTFkZDY3OTc1M2U2NjA0NAyL5SM=: 00:14:17.120 09:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:17.120 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:17.120 09:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c 00:14:17.120 09:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.120 09:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:17.120 09:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.120 09:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:14:17.120 09:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:14:17.120 09:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:14:17.120 09:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:14:17.120 09:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:14:17.120 09:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:14:17.379 09:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:14:17.379 09:42:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:17.379 09:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:17.379 09:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:17.379 09:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:17.379 09:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:17.379 09:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:17.379 09:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.379 09:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:17.379 09:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.379 09:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:17.379 09:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:17.379 09:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:18.314 00:14:18.314 09:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:18.314 09:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:18.314 09:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:18.314 09:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:18.314 09:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:18.314 09:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.314 09:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:18.314 09:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.314 09:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:18.314 { 00:14:18.314 "cntlid": 145, 00:14:18.314 "qid": 0, 00:14:18.314 "state": "enabled", 00:14:18.314 "thread": "nvmf_tgt_poll_group_000", 00:14:18.314 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c", 00:14:18.314 "listen_address": { 00:14:18.314 "trtype": "TCP", 00:14:18.314 "adrfam": "IPv4", 00:14:18.314 "traddr": "10.0.0.3", 
00:14:18.314 "trsvcid": "4420" 00:14:18.314 }, 00:14:18.314 "peer_address": { 00:14:18.314 "trtype": "TCP", 00:14:18.314 "adrfam": "IPv4", 00:14:18.314 "traddr": "10.0.0.1", 00:14:18.314 "trsvcid": "46802" 00:14:18.314 }, 00:14:18.314 "auth": { 00:14:18.314 "state": "completed", 00:14:18.314 "digest": "sha512", 00:14:18.314 "dhgroup": "ffdhe8192" 00:14:18.314 } 00:14:18.314 } 00:14:18.314 ]' 00:14:18.314 09:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:18.572 09:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:18.572 09:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:18.572 09:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:18.572 09:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:18.572 09:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:18.572 09:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:18.572 09:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:18.830 09:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTFkMDQyZTZhNmViZjJlMmYwZDgwZmRjZTgzM2RlZWI0MTUzZmI4OTFkYzllYTg2xOE6TQ==: --dhchap-ctrl-secret DHHC-1:03:MmU4OGI1ZTM5OTBkYzc1ODg4NmIxODY3MGQ5MWM3OTQwOTU4ZTM4NGU4Y2YzMDFiZDFlZmE0MzE0NmU3OTQ3NqAJprQ=: 00:14:18.830 09:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c --hostid 9203ba0c-8506-4f0b-a886-a7f874c4694c -l 0 --dhchap-secret DHHC-1:00:ZTFkMDQyZTZhNmViZjJlMmYwZDgwZmRjZTgzM2RlZWI0MTUzZmI4OTFkYzllYTg2xOE6TQ==: --dhchap-ctrl-secret DHHC-1:03:MmU4OGI1ZTM5OTBkYzc1ODg4NmIxODY3MGQ5MWM3OTQwOTU4ZTM4NGU4Y2YzMDFiZDFlZmE0MzE0NmU3OTQ3NqAJprQ=: 00:14:19.764 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:19.764 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:19.764 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c 00:14:19.764 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.765 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:19.765 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.765 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c --dhchap-key key1 00:14:19.765 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.765 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:19.765 
09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.765 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:14:19.765 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:14:19.765 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:14:19.765 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:14:19.765 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:19.765 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:14:19.765 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:19.765 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:14:19.765 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:14:19.765 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:14:20.331 request: 00:14:20.331 { 00:14:20.331 "name": "nvme0", 00:14:20.331 "trtype": "tcp", 00:14:20.331 "traddr": "10.0.0.3", 00:14:20.331 "adrfam": "ipv4", 00:14:20.331 "trsvcid": "4420", 00:14:20.331 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:14:20.331 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c", 00:14:20.331 "prchk_reftag": false, 00:14:20.331 "prchk_guard": false, 00:14:20.331 "hdgst": false, 00:14:20.331 "ddgst": false, 00:14:20.331 "dhchap_key": "key2", 00:14:20.331 "allow_unrecognized_csi": false, 00:14:20.331 "method": "bdev_nvme_attach_controller", 00:14:20.331 "req_id": 1 00:14:20.331 } 00:14:20.331 Got JSON-RPC error response 00:14:20.331 response: 00:14:20.331 { 00:14:20.331 "code": -5, 00:14:20.331 "message": "Input/output error" 00:14:20.331 } 00:14:20.331 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:14:20.331 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:20.331 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:20.331 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:20.331 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c 00:14:20.331 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.331 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
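The request/response pair above is a deliberate failure: the host entry on the target was re-registered with key1 only, so an attach that offers key2 cannot complete DH-HMAC-CHAP and bdev_nvme_attach_controller surfaces it as JSON-RPC error -5 (Input/output error). The trace drives this through the NOT helper from autotest_common.sh; a plain-shell sketch of the same negative check, with RPC, subnqn and hostnqn as in the earlier sketch:

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py"
    subnqn="nqn.2024-03.io.spdk:cnode0"
    hostnqn="nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c"

    # Expect the attach to fail: the target knows key1 for this host, the host offers key2
    if "$RPC" -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
           -q "$hostnqn" -n "$subnqn" -b nvme0 --dhchap-key key2; then
        echo "unexpected: attach with a mismatched DH-HMAC-CHAP key succeeded" >&2
        exit 1
    fi
    # rpc.py exits non-zero and prints the code -5 / "Input/output error" response seen in the trace

The following traced cases repeat the pattern for other mismatches (key1 with ckey2, and key1 with ckey1 when the target-side host entry carries no controller key), each of which is likewise expected to fail.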
00:14:20.331 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.331 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:20.331 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.331 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:20.331 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.331 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:14:20.331 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:14:20.331 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:14:20.331 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:14:20.331 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:20.331 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:14:20.331 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:20.331 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:14:20.332 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:14:20.332 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:14:20.898 request: 00:14:20.898 { 00:14:20.898 "name": "nvme0", 00:14:20.898 "trtype": "tcp", 00:14:20.898 "traddr": "10.0.0.3", 00:14:20.898 "adrfam": "ipv4", 00:14:20.898 "trsvcid": "4420", 00:14:20.898 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:14:20.898 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c", 00:14:20.898 "prchk_reftag": false, 00:14:20.898 "prchk_guard": false, 00:14:20.898 "hdgst": false, 00:14:20.898 "ddgst": false, 00:14:20.898 "dhchap_key": "key1", 00:14:20.898 "dhchap_ctrlr_key": "ckey2", 00:14:20.898 "allow_unrecognized_csi": false, 00:14:20.898 "method": "bdev_nvme_attach_controller", 00:14:20.898 "req_id": 1 00:14:20.898 } 00:14:20.898 Got JSON-RPC error response 00:14:20.898 response: 00:14:20.898 { 00:14:20.898 "code": -5, 00:14:20.898 "message": "Input/output error" 00:14:20.898 } 00:14:20.898 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:14:20.898 09:42:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:20.898 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:20.898 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:20.898 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c 00:14:20.898 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.898 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:20.898 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.898 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c --dhchap-key key1 00:14:20.898 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.898 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:20.898 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.898 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:20.898 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:14:20.898 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:20.898 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:14:20.898 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:20.898 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:14:20.898 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:20.898 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:20.898 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:20.898 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:21.464 request: 00:14:21.464 { 00:14:21.464 "name": "nvme0", 00:14:21.464 "trtype": "tcp", 00:14:21.464 "traddr": "10.0.0.3", 00:14:21.464 "adrfam": "ipv4", 00:14:21.464 "trsvcid": "4420", 00:14:21.464 "subnqn": 
"nqn.2024-03.io.spdk:cnode0", 00:14:21.464 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c", 00:14:21.464 "prchk_reftag": false, 00:14:21.464 "prchk_guard": false, 00:14:21.464 "hdgst": false, 00:14:21.464 "ddgst": false, 00:14:21.464 "dhchap_key": "key1", 00:14:21.464 "dhchap_ctrlr_key": "ckey1", 00:14:21.464 "allow_unrecognized_csi": false, 00:14:21.464 "method": "bdev_nvme_attach_controller", 00:14:21.464 "req_id": 1 00:14:21.464 } 00:14:21.464 Got JSON-RPC error response 00:14:21.464 response: 00:14:21.464 { 00:14:21.464 "code": -5, 00:14:21.464 "message": "Input/output error" 00:14:21.464 } 00:14:21.464 09:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:14:21.464 09:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:21.464 09:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:21.464 09:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:21.464 09:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c 00:14:21.464 09:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.464 09:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:21.464 09:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.464 09:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 67261 00:14:21.464 09:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 67261 ']' 00:14:21.464 09:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 67261 00:14:21.464 09:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:14:21.464 09:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:21.464 09:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67261 00:14:21.464 killing process with pid 67261 00:14:21.464 09:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:21.464 09:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:21.464 09:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67261' 00:14:21.464 09:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 67261 00:14:21.464 09:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 67261 00:14:21.724 09:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:14:21.724 09:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:21.724 09:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:21.724 09:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:14:21.724 09:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=70399 00:14:21.724 09:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 70399 00:14:21.724 09:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 70399 ']' 00:14:21.724 09:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:21.724 09:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:14:21.724 09:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:21.724 09:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:21.724 09:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:21.724 09:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:23.109 09:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:23.109 09:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:14:23.109 09:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:23.109 09:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:23.109 09:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:23.109 09:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:23.109 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:23.109 09:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:14:23.109 09:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 70399 00:14:23.109 09:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 70399 ']' 00:14:23.109 09:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:23.109 09:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:23.109 09:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
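At this point the first target application (pid 67261) has been killed and a second one (pid 70399) is started with --wait-for-rpc and the nvmf_auth log flag, so that DH-HMAC-CHAP keys can be loaded through keyring RPCs before the app finishes initializing. A sketch of that startup, with the binary path, flags and namespace taken from the trace; the polling loop and the framework_start_init call are assumptions, since the script's waitforlisten helper and its batched rpc_cmd are not expanded in the log:

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py"

    # Start the target paused, waiting for configuration RPCs
    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
    nvmfpid=$!

    # Poll the default RPC socket (/var/tmp/spdk.sock) until the app answers
    until "$RPC" rpc_get_methods > /dev/null 2>&1; do sleep 0.5; done

    # Keys are then registered (next part of the trace) and initialization is resumed,
    # typically via framework_start_init (assumed; that step is not visible in this trace)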
00:14:23.109 09:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:23.109 09:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:23.109 09:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:23.109 09:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:14:23.109 09:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:14:23.109 09:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.109 09:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:23.369 null0 00:14:23.369 09:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.369 09:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:14:23.369 09:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.frB 00:14:23.369 09:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.369 09:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:23.369 09:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.369 09:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.xBS ]] 00:14:23.369 09:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.xBS 00:14:23.369 09:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.369 09:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:23.369 09:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.369 09:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:14:23.369 09:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.t5J 00:14:23.369 09:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.369 09:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:23.369 09:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.369 09:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.MIh ]] 00:14:23.369 09:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.MIh 00:14:23.369 09:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.369 09:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:23.369 09:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.369 09:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:14:23.369 09:42:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.Ltk 00:14:23.369 09:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.369 09:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:23.369 09:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.369 09:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.EpZ ]] 00:14:23.369 09:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.EpZ 00:14:23.369 09:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.369 09:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:23.369 09:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.369 09:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:14:23.369 09:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.ElQ 00:14:23.369 09:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.369 09:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:23.369 09:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.369 09:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:14:23.369 09:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:14:23.369 09:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:23.369 09:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:23.369 09:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:23.369 09:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:23.369 09:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:23.369 09:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c --dhchap-key key3 00:14:23.369 09:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.369 09:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:23.369 09:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.369 09:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:23.369 09:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
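With the new target up, the trace registers the generated DHHC-1 secret files under short keyring names via keyring_file_add_key; from then on the nvmf_subsystem_add_host and bdev_nvme_attach_controller calls refer to those names rather than to files or inline secrets. A minimal sketch using the paths and names from this run (the host application behind /var/tmp/host.sock is assumed to have the same key names registered, which happened earlier in the test):

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py"
    subnqn="nqn.2024-03.io.spdk:cnode0"
    hostnqn="nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c"

    # Register on-disk secrets under keyring names (key3 has no matching ctrlr key in this run)
    "$RPC" keyring_file_add_key key3  /tmp/spdk.key-sha512.ElQ
    "$RPC" keyring_file_add_key ckey0 /tmp/spdk.key-sha512.xBS

    # Subsequent auth configuration refers to the keyring names
    "$RPC" nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key3
    "$RPC" -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
        -q "$hostnqn" -n "$subnqn" -b nvme0 --dhchap-key key3

The traced cases that follow narrow the host's allowed digests (sha256 only) and DH groups (ffdhe2048 only) and expect the attach with key3 to fail, before restoring the full digest and dhgroup lists.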
00:14:23.370 09:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:24.306 nvme0n1 00:14:24.306 09:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:24.306 09:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:24.306 09:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:24.873 09:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:24.873 09:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:24.873 09:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.873 09:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:24.873 09:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.873 09:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:24.873 { 00:14:24.873 "cntlid": 1, 00:14:24.873 "qid": 0, 00:14:24.873 "state": "enabled", 00:14:24.873 "thread": "nvmf_tgt_poll_group_000", 00:14:24.873 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c", 00:14:24.873 "listen_address": { 00:14:24.873 "trtype": "TCP", 00:14:24.873 "adrfam": "IPv4", 00:14:24.873 "traddr": "10.0.0.3", 00:14:24.873 "trsvcid": "4420" 00:14:24.873 }, 00:14:24.873 "peer_address": { 00:14:24.873 "trtype": "TCP", 00:14:24.873 "adrfam": "IPv4", 00:14:24.873 "traddr": "10.0.0.1", 00:14:24.873 "trsvcid": "46860" 00:14:24.873 }, 00:14:24.873 "auth": { 00:14:24.873 "state": "completed", 00:14:24.873 "digest": "sha512", 00:14:24.873 "dhgroup": "ffdhe8192" 00:14:24.873 } 00:14:24.873 } 00:14:24.873 ]' 00:14:24.873 09:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:24.873 09:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:24.873 09:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:24.873 09:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:24.873 09:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:24.873 09:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:24.873 09:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:24.873 09:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:25.132 09:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:MzhlYzIxNzUwOThmN2UzOTM4ZTBkYTA1MjIwMzZiZmMwNmZiMDFiODk3MzkyZjUxOTFkZDY3OTc1M2U2NjA0NAyL5SM=: 00:14:25.132 09:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c --hostid 9203ba0c-8506-4f0b-a886-a7f874c4694c -l 0 --dhchap-secret DHHC-1:03:MzhlYzIxNzUwOThmN2UzOTM4ZTBkYTA1MjIwMzZiZmMwNmZiMDFiODk3MzkyZjUxOTFkZDY3OTc1M2U2NjA0NAyL5SM=: 00:14:26.067 09:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:26.067 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:26.067 09:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c 00:14:26.067 09:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.067 09:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:26.067 09:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.067 09:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c --dhchap-key key3 00:14:26.067 09:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.067 09:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:26.067 09:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.067 09:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:14:26.067 09:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:14:26.326 09:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:14:26.326 09:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:14:26.326 09:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:14:26.326 09:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:14:26.326 09:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:26.326 09:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:14:26.326 09:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:26.326 09:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:26.326 09:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:26.326 09:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:26.585 request: 00:14:26.585 { 00:14:26.585 "name": "nvme0", 00:14:26.585 "trtype": "tcp", 00:14:26.585 "traddr": "10.0.0.3", 00:14:26.585 "adrfam": "ipv4", 00:14:26.585 "trsvcid": "4420", 00:14:26.585 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:14:26.585 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c", 00:14:26.585 "prchk_reftag": false, 00:14:26.585 "prchk_guard": false, 00:14:26.585 "hdgst": false, 00:14:26.585 "ddgst": false, 00:14:26.585 "dhchap_key": "key3", 00:14:26.585 "allow_unrecognized_csi": false, 00:14:26.585 "method": "bdev_nvme_attach_controller", 00:14:26.585 "req_id": 1 00:14:26.585 } 00:14:26.585 Got JSON-RPC error response 00:14:26.585 response: 00:14:26.585 { 00:14:26.585 "code": -5, 00:14:26.585 "message": "Input/output error" 00:14:26.585 } 00:14:26.585 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:14:26.585 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:26.585 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:26.585 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:26.585 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:14:26.585 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:14:26.585 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:14:26.585 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:14:26.843 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:14:26.843 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:14:26.843 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:14:26.843 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:14:26.843 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:26.844 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:14:26.844 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:26.844 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:26.844 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:26.844 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:27.117 request: 00:14:27.117 { 00:14:27.117 "name": "nvme0", 00:14:27.117 "trtype": "tcp", 00:14:27.117 "traddr": "10.0.0.3", 00:14:27.117 "adrfam": "ipv4", 00:14:27.117 "trsvcid": "4420", 00:14:27.117 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:14:27.117 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c", 00:14:27.117 "prchk_reftag": false, 00:14:27.117 "prchk_guard": false, 00:14:27.117 "hdgst": false, 00:14:27.117 "ddgst": false, 00:14:27.117 "dhchap_key": "key3", 00:14:27.117 "allow_unrecognized_csi": false, 00:14:27.117 "method": "bdev_nvme_attach_controller", 00:14:27.117 "req_id": 1 00:14:27.117 } 00:14:27.117 Got JSON-RPC error response 00:14:27.117 response: 00:14:27.117 { 00:14:27.117 "code": -5, 00:14:27.117 "message": "Input/output error" 00:14:27.117 } 00:14:27.117 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:14:27.117 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:27.117 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:27.117 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:27.117 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:14:27.117 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:14:27.117 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:14:27.117 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:14:27.117 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:14:27.117 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:14:27.375 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c 00:14:27.375 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.375 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:27.375 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.375 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c 00:14:27.375 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.375 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:27.375 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.375 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:14:27.375 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:14:27.375 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:14:27.375 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:14:27.375 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:27.375 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:14:27.375 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:27.375 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:14:27.375 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:14:27.375 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:14:27.941 request: 00:14:27.941 { 00:14:27.941 "name": "nvme0", 00:14:27.941 "trtype": "tcp", 00:14:27.941 "traddr": "10.0.0.3", 00:14:27.941 "adrfam": "ipv4", 00:14:27.941 "trsvcid": "4420", 00:14:27.941 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:14:27.941 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c", 00:14:27.941 "prchk_reftag": false, 00:14:27.941 "prchk_guard": false, 00:14:27.941 "hdgst": false, 00:14:27.941 "ddgst": false, 00:14:27.941 "dhchap_key": "key0", 00:14:27.941 "dhchap_ctrlr_key": "key1", 00:14:27.942 "allow_unrecognized_csi": false, 00:14:27.942 "method": "bdev_nvme_attach_controller", 00:14:27.942 "req_id": 1 00:14:27.942 } 00:14:27.942 Got JSON-RPC error response 00:14:27.942 response: 00:14:27.942 { 00:14:27.942 "code": -5, 00:14:27.942 "message": "Input/output error" 00:14:27.942 } 00:14:27.942 09:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:14:27.942 09:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:27.942 09:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:27.942 09:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:14:27.942 09:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:14:27.942 09:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:14:27.942 09:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:14:28.200 nvme0n1 00:14:28.200 09:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:14:28.200 09:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:28.200 09:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:14:28.782 09:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:28.782 09:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:28.782 09:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:29.049 09:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c --dhchap-key key1 00:14:29.049 09:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.049 09:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:29.049 09:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.049 09:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:14:29.049 09:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:14:29.049 09:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:14:29.983 nvme0n1 00:14:29.983 09:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:14:29.983 09:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:29.983 09:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:14:30.241 09:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:30.241 09:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c --dhchap-key key2 --dhchap-ctrlr-key key3 00:14:30.241 09:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.241 09:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:30.241 09:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.241 09:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:14:30.241 09:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:30.241 09:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:14:30.499 09:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:30.499 09:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:NTI1NzhmNDY1OTc4MTIxMDVkMjg0OTQ0YzhkMzdiNzRmYjA5NWQyYmYwOWY0NDE3KVMkCw==: --dhchap-ctrl-secret DHHC-1:03:MzhlYzIxNzUwOThmN2UzOTM4ZTBkYTA1MjIwMzZiZmMwNmZiMDFiODk3MzkyZjUxOTFkZDY3OTc1M2U2NjA0NAyL5SM=: 00:14:30.499 09:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c --hostid 9203ba0c-8506-4f0b-a886-a7f874c4694c -l 0 --dhchap-secret DHHC-1:02:NTI1NzhmNDY1OTc4MTIxMDVkMjg0OTQ0YzhkMzdiNzRmYjA5NWQyYmYwOWY0NDE3KVMkCw==: --dhchap-ctrl-secret DHHC-1:03:MzhlYzIxNzUwOThmN2UzOTM4ZTBkYTA1MjIwMzZiZmMwNmZiMDFiODk3MzkyZjUxOTFkZDY3OTc1M2U2NjA0NAyL5SM=: 00:14:31.064 09:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:14:31.064 09:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:14:31.064 09:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:14:31.064 09:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:14:31.064 09:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:14:31.064 09:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:14:31.064 09:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:14:31.064 09:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:31.064 09:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:31.630 09:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:14:31.630 09:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:14:31.630 09:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:14:31.630 09:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:14:31.630 09:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:31.630 09:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:14:31.630 09:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:31.630 09:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:14:31.630 09:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:14:31.630 09:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:14:32.197 request: 00:14:32.197 { 00:14:32.197 "name": "nvme0", 00:14:32.197 "trtype": "tcp", 00:14:32.197 "traddr": "10.0.0.3", 00:14:32.197 "adrfam": "ipv4", 00:14:32.197 "trsvcid": "4420", 00:14:32.197 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:14:32.197 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c", 00:14:32.197 "prchk_reftag": false, 00:14:32.197 "prchk_guard": false, 00:14:32.197 "hdgst": false, 00:14:32.197 "ddgst": false, 00:14:32.197 "dhchap_key": "key1", 00:14:32.197 "allow_unrecognized_csi": false, 00:14:32.197 "method": "bdev_nvme_attach_controller", 00:14:32.197 "req_id": 1 00:14:32.197 } 00:14:32.197 Got JSON-RPC error response 00:14:32.197 response: 00:14:32.197 { 00:14:32.197 "code": -5, 00:14:32.197 "message": "Input/output error" 00:14:32.197 } 00:14:32.197 09:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:14:32.197 09:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:32.197 09:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:32.197 09:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:32.197 09:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:14:32.197 09:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:14:32.197 09:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:14:33.201 nvme0n1 00:14:33.201 
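The sequence traced above is one DH-CHAP key-rotation step: nvmf_subsystem_set_keys narrows which keys the target will accept for this host, an attach that still presents a stale key is rejected with JSON-RPC error -5 (Input/output error), and an attach with the matching --dhchap-key/--dhchap-ctrlr-key pair succeeds and yields the nvme0n1 bdev. The lines below are a minimal sketch of that step, not lifted from auth.sh itself; they reuse only the rpc.py subcommands, socket path, address and NQNs visible in this trace, and they assume an SPDK nvmf target listening on 10.0.0.3:4420 (default RPC socket), a host-side bdev app on /var/tmp/host.sock, and key2/key3 already registered as DH-CHAP keys earlier in the run:

  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c
  SUBNQN=nqn.2024-03.io.spdk:cnode0

  # target side: from now on accept only key2 (host) / key3 (controller) for this host
  scripts/rpc.py nvmf_subsystem_set_keys "$SUBNQN" "$HOSTNQN" --dhchap-key key2 --dhchap-ctrlr-key key3

  # host side: attaching with the matching pair succeeds; a stale pair fails with -5 (Input/output error)
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
      -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3

  # confirm the controller came up, then drop it before the next rotation
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
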
09:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:14:33.201 09:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:33.201 09:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:14:33.459 09:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:33.459 09:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:33.459 09:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:33.718 09:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c 00:14:33.718 09:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.718 09:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:33.718 09:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.718 09:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:14:33.718 09:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:14:33.718 09:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:14:34.284 nvme0n1 00:14:34.284 09:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:14:34.284 09:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:34.284 09:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:14:34.542 09:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:34.542 09:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:34.542 09:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:34.801 09:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c --dhchap-key key1 --dhchap-ctrlr-key key3 00:14:34.801 09:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.801 09:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:34.801 09:42:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.801 09:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:MTM1OTcwNTEwMDc2NGI3MjIzNzk5ZWJkM2I3OGRjMzGbGEP/: '' 2s 00:14:34.801 09:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:14:34.801 09:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:14:34.801 09:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:MTM1OTcwNTEwMDc2NGI3MjIzNzk5ZWJkM2I3OGRjMzGbGEP/: 00:14:34.801 09:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:14:34.801 09:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:14:34.801 09:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:14:34.801 09:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:MTM1OTcwNTEwMDc2NGI3MjIzNzk5ZWJkM2I3OGRjMzGbGEP/: ]] 00:14:34.801 09:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:MTM1OTcwNTEwMDc2NGI3MjIzNzk5ZWJkM2I3OGRjMzGbGEP/: 00:14:34.801 09:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:14:34.801 09:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:14:34.801 09:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:14:37.332 09:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:14:37.332 09:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:14:37.332 09:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:14:37.332 09:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:14:37.332 09:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:14:37.332 09:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:14:37.332 09:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:14:37.332 09:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c --dhchap-key key1 --dhchap-ctrlr-key key2 00:14:37.332 09:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.332 09:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:37.332 09:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.332 09:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:NTI1NzhmNDY1OTc4MTIxMDVkMjg0OTQ0YzhkMzdiNzRmYjA5NWQyYmYwOWY0NDE3KVMkCw==: 2s 00:14:37.332 09:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:14:37.332 09:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:14:37.332 09:42:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:14:37.332 09:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:NTI1NzhmNDY1OTc4MTIxMDVkMjg0OTQ0YzhkMzdiNzRmYjA5NWQyYmYwOWY0NDE3KVMkCw==: 00:14:37.332 09:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:14:37.332 09:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:14:37.332 09:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:14:37.332 09:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:NTI1NzhmNDY1OTc4MTIxMDVkMjg0OTQ0YzhkMzdiNzRmYjA5NWQyYmYwOWY0NDE3KVMkCw==: ]] 00:14:37.332 09:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:NTI1NzhmNDY1OTc4MTIxMDVkMjg0OTQ0YzhkMzdiNzRmYjA5NWQyYmYwOWY0NDE3KVMkCw==: 00:14:37.332 09:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:14:37.332 09:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:14:39.233 09:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:14:39.233 09:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:14:39.233 09:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:14:39.233 09:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:14:39.233 09:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:14:39.233 09:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:14:39.233 09:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:14:39.233 09:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:39.233 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:39.233 09:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c --dhchap-key key0 --dhchap-ctrlr-key key1 00:14:39.233 09:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.233 09:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:39.233 09:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.233 09:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:14:39.233 09:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:14:39.233 09:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:14:40.165 nvme0n1 00:14:40.165 09:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c --dhchap-key key2 --dhchap-ctrlr-key key3 00:14:40.165 09:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.165 09:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:40.165 09:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.165 09:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:14:40.165 09:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:14:40.731 09:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:14:40.731 09:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:40.731 09:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:14:40.989 09:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:40.989 09:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c 00:14:40.989 09:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.989 09:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:40.989 09:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.989 09:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:14:40.989 09:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:14:41.553 09:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:14:41.553 09:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:41.553 09:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:14:41.812 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:41.812 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c --dhchap-key key2 --dhchap-ctrlr-key key3 00:14:41.812 09:42:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.812 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:41.812 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.812 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:14:41.812 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:14:41.812 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:14:41.812 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:14:41.812 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:41.812 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:14:41.812 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:41.812 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:14:41.812 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:14:42.379 request: 00:14:42.379 { 00:14:42.379 "name": "nvme0", 00:14:42.379 "dhchap_key": "key1", 00:14:42.379 "dhchap_ctrlr_key": "key3", 00:14:42.379 "method": "bdev_nvme_set_keys", 00:14:42.379 "req_id": 1 00:14:42.379 } 00:14:42.379 Got JSON-RPC error response 00:14:42.379 response: 00:14:42.379 { 00:14:42.379 "code": -13, 00:14:42.379 "message": "Permission denied" 00:14:42.379 } 00:14:42.379 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:14:42.379 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:42.379 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:42.379 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:42.379 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:14:42.379 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:14:42.379 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:42.637 09:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:14:42.637 09:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:14:43.574 09:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:14:43.574 09:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:43.574 09:42:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:14:44.141 09:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:14:44.141 09:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c --dhchap-key key0 --dhchap-ctrlr-key key1 00:14:44.141 09:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.141 09:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:44.141 09:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.141 09:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:14:44.141 09:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:14:44.141 09:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:14:45.076 nvme0n1 00:14:45.076 09:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c --dhchap-key key2 --dhchap-ctrlr-key key3 00:14:45.076 09:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.076 09:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:45.076 09:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.076 09:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:14:45.076 09:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:14:45.077 09:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:14:45.077 09:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:14:45.077 09:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:45.077 09:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:14:45.077 09:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:45.077 09:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 
--dhchap-key key2 --dhchap-ctrlr-key key0 00:14:45.077 09:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:14:45.644 request: 00:14:45.644 { 00:14:45.644 "name": "nvme0", 00:14:45.644 "dhchap_key": "key2", 00:14:45.644 "dhchap_ctrlr_key": "key0", 00:14:45.644 "method": "bdev_nvme_set_keys", 00:14:45.644 "req_id": 1 00:14:45.644 } 00:14:45.644 Got JSON-RPC error response 00:14:45.644 response: 00:14:45.644 { 00:14:45.644 "code": -13, 00:14:45.644 "message": "Permission denied" 00:14:45.644 } 00:14:45.644 09:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:14:45.644 09:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:45.644 09:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:45.644 09:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:45.644 09:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:14:45.644 09:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:45.644 09:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:14:45.902 09:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:14:45.902 09:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:14:47.276 09:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:14:47.276 09:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:14:47.276 09:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:47.276 09:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:14:47.276 09:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:14:47.276 09:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:14:47.276 09:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 67287 00:14:47.276 09:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 67287 ']' 00:14:47.276 09:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 67287 00:14:47.276 09:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:14:47.276 09:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:47.276 09:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67287 00:14:47.276 09:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:47.276 killing process with pid 67287 00:14:47.276 09:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:47.276 09:42:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67287' 00:14:47.276 09:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 67287 00:14:47.276 09:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 67287 00:14:47.570 09:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:14:47.570 09:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:47.570 09:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:14:47.828 09:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:47.828 09:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:14:47.828 09:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:47.828 09:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:47.828 rmmod nvme_tcp 00:14:47.828 rmmod nvme_fabrics 00:14:47.828 rmmod nvme_keyring 00:14:47.828 09:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:47.828 09:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:14:47.828 09:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:14:47.828 09:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 70399 ']' 00:14:47.828 09:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 70399 00:14:47.828 09:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 70399 ']' 00:14:47.828 09:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 70399 00:14:47.828 09:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:14:47.828 09:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:47.828 09:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70399 00:14:47.828 09:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:47.828 09:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:47.828 killing process with pid 70399 00:14:47.828 09:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70399' 00:14:47.828 09:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 70399 00:14:47.828 09:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 70399 00:14:48.086 09:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:48.086 09:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:48.086 09:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:48.086 09:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:14:48.086 09:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 
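Teardown at the end of the auth test follows the usual nvmftestfini pattern traced above: the host-side bdev app (pid 67287) and the nvmf target (pid 70399) are killed by pid, the nvme-tcp, nvme-fabrics and nvme_keyring modules are unloaded, and the iptr helper (its pipeline continues just below) removes only the firewall rules that setup tagged with an SPDK_NVMF comment. A minimal sketch of those last two steps, assuming the rules were installed with -m comment --comment 'SPDK_NVMF:...' as the next test's setup shows further down:

  # unload the NVMe/TCP initiator stack once all controllers are detached
  modprobe -v -r nvme-tcp        # also pulls out nvme_fabrics/nvme_keyring if nothing else uses them
  modprobe -v -r nvme-fabrics

  # drop only the SPDK_NVMF-tagged iptables rules, leaving everything else intact
  iptables-save | grep -v SPDK_NVMF | iptables-restore
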
00:14:48.086 09:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:14:48.086 09:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:14:48.086 09:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:48.086 09:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:48.086 09:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:48.086 09:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:48.086 09:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:48.086 09:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:14:48.086 09:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:48.086 09:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:48.086 09:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:14:48.086 09:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:14:48.086 09:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:48.086 09:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:14:48.086 09:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:48.345 09:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:48.345 09:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:48.345 09:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:48.345 09:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:48.345 09:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:48.345 09:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:48.345 09:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@300 -- # return 0 00:14:48.346 09:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.frB /tmp/spdk.key-sha256.t5J /tmp/spdk.key-sha384.Ltk /tmp/spdk.key-sha512.ElQ /tmp/spdk.key-sha512.xBS /tmp/spdk.key-sha384.MIh /tmp/spdk.key-sha256.EpZ '' /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log /home/vagrant/spdk_repo/spdk/../output/nvmf-auth.log 00:14:48.346 00:14:48.346 real 3m18.648s 00:14:48.346 user 7m58.019s 00:14:48.346 sys 0m29.976s 00:14:48.346 ************************************ 00:14:48.346 END TEST nvmf_auth_target 00:14:48.346 ************************************ 00:14:48.346 09:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:48.346 09:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x 00:14:48.346 09:42:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:14:48.346 09:42:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:14:48.346 09:42:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:14:48.346 09:42:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:48.346 09:42:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:48.346 ************************************ 00:14:48.346 START TEST nvmf_bdevio_no_huge 00:14:48.346 ************************************ 00:14:48.346 09:42:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:14:48.346 * Looking for test storage... 00:14:48.346 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:48.606 09:42:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:48.606 09:42:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lcov --version 00:14:48.606 09:42:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:48.606 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:48.606 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:48.606 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:48.606 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:48.606 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:14:48.606 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:14:48.606 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:14:48.606 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:14:48.606 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:14:48.606 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:14:48.606 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:14:48.606 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:48.606 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:14:48.606 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:14:48.606 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:48.606 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:48.606 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:14:48.606 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:14:48.606 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:48.606 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:14:48.606 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:14:48.606 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:14:48.606 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:14:48.606 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:48.606 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:14:48.606 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:14:48.606 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:48.606 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:48.606 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:14:48.606 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:48.606 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:48.606 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:48.606 --rc genhtml_branch_coverage=1 00:14:48.606 --rc genhtml_function_coverage=1 00:14:48.606 --rc genhtml_legend=1 00:14:48.606 --rc geninfo_all_blocks=1 00:14:48.606 --rc geninfo_unexecuted_blocks=1 00:14:48.606 00:14:48.606 ' 00:14:48.606 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:48.606 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:48.606 --rc genhtml_branch_coverage=1 00:14:48.606 --rc genhtml_function_coverage=1 00:14:48.606 --rc genhtml_legend=1 00:14:48.606 --rc geninfo_all_blocks=1 00:14:48.606 --rc geninfo_unexecuted_blocks=1 00:14:48.606 00:14:48.606 ' 00:14:48.606 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:48.606 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:48.606 --rc genhtml_branch_coverage=1 00:14:48.606 --rc genhtml_function_coverage=1 00:14:48.606 --rc genhtml_legend=1 00:14:48.606 --rc geninfo_all_blocks=1 00:14:48.606 --rc geninfo_unexecuted_blocks=1 00:14:48.606 00:14:48.606 ' 00:14:48.606 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:48.606 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:48.606 --rc genhtml_branch_coverage=1 00:14:48.606 --rc genhtml_function_coverage=1 00:14:48.606 --rc genhtml_legend=1 00:14:48.606 --rc geninfo_all_blocks=1 00:14:48.606 --rc geninfo_unexecuted_blocks=1 00:14:48.606 00:14:48.606 ' 00:14:48.606 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:48.606 
09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:14:48.606 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:48.606 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:48.606 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:48.606 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:48.606 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:48.606 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:48.606 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:48.606 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:48.606 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:48.606 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:48.607 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c 00:14:48.607 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=9203ba0c-8506-4f0b-a886-a7f874c4694c 00:14:48.607 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:48.607 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:48.607 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:48.607 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:48.607 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:48.607 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:14:48.607 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:48.607 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:48.607 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:48.607 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:48.607 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:48.607 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:48.607 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:14:48.607 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:48.607 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:14:48.607 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:48.607 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:48.607 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:48.607 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:48.607 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:48.607 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:48.607 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:48.607 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:48.607 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:48.607 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:48.607 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:48.607 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:48.607 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:14:48.607 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:48.607 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:48.607 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:48.607 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:48.607 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:48.607 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:48.607 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:48.607 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:48.607 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:14:48.607 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:14:48.607 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:14:48.607 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:14:48.607 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:14:48.607 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@460 -- # nvmf_veth_init 00:14:48.607 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:48.607 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:48.607 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:48.607 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:48.607 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:48.607 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:48.607 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:48.607 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:48.607 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:48.607 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:48.607 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:48.607 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:48.607 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:48.607 
09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:48.607 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:48.607 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:48.607 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:48.607 Cannot find device "nvmf_init_br" 00:14:48.607 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # true 00:14:48.607 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:48.607 Cannot find device "nvmf_init_br2" 00:14:48.607 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # true 00:14:48.607 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:14:48.607 Cannot find device "nvmf_tgt_br" 00:14:48.607 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # true 00:14:48.607 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:14:48.607 Cannot find device "nvmf_tgt_br2" 00:14:48.607 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # true 00:14:48.607 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:48.607 Cannot find device "nvmf_init_br" 00:14:48.607 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # true 00:14:48.607 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:48.607 Cannot find device "nvmf_init_br2" 00:14:48.607 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # true 00:14:48.607 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:48.607 Cannot find device "nvmf_tgt_br" 00:14:48.607 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # true 00:14:48.607 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:48.607 Cannot find device "nvmf_tgt_br2" 00:14:48.607 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # true 00:14:48.607 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:48.866 Cannot find device "nvmf_br" 00:14:48.866 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # true 00:14:48.866 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:14:48.866 Cannot find device "nvmf_init_if" 00:14:48.866 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # true 00:14:48.866 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:48.866 Cannot find device "nvmf_init_if2" 00:14:48.866 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # true 00:14:48.866 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete 
nvmf_tgt_if 00:14:48.866 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:48.866 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # true 00:14:48.866 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:48.866 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:48.866 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # true 00:14:48.866 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:48.866 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:48.866 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:48.866 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:48.867 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:48.867 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:48.867 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:48.867 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:48.867 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:48.867 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:48.867 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:48.867 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:48.867 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:48.867 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:48.867 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:48.867 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:48.867 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:48.867 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:48.867 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:48.867 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:48.867 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:48.867 09:42:36 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:48.867 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:48.867 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:48.867 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:48.867 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:48.867 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:48.867 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:49.126 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:49.126 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:49.126 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:49.126 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:49.126 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:14:49.126 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:49.126 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.072 ms 00:14:49.126 00:14:49.126 --- 10.0.0.3 ping statistics --- 00:14:49.126 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:49.126 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:14:49.126 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:14:49.126 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:14:49.126 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.064 ms 00:14:49.126 00:14:49.126 --- 10.0.0.4 ping statistics --- 00:14:49.126 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:49.126 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:14:49.126 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:49.126 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:49.126 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:14:49.126 00:14:49.126 --- 10.0.0.1 ping statistics --- 00:14:49.126 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:49.126 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:14:49.126 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:49.126 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:49.126 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.044 ms 00:14:49.126 00:14:49.126 --- 10.0.0.2 ping statistics --- 00:14:49.126 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:49.126 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:14:49.126 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:49.126 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@461 -- # return 0 00:14:49.126 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:49.126 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:49.126 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:49.126 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:49.126 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:49.126 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:49.126 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:49.126 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:14:49.126 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:49.126 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:49.126 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:49.127 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=71059 00:14:49.127 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 71059 00:14:49.127 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 71059 ']' 00:14:49.127 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:49.127 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:14:49.127 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:49.127 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:49.127 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:49.127 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:49.127 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:49.127 [2024-11-19 09:42:36.617935] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 
00:14:49.127 [2024-11-19 09:42:36.618051] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:14:49.386 [2024-11-19 09:42:36.784402] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:49.386 [2024-11-19 09:42:36.849067] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:49.386 [2024-11-19 09:42:36.849121] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:49.386 [2024-11-19 09:42:36.849132] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:49.386 [2024-11-19 09:42:36.849141] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:49.386 [2024-11-19 09:42:36.849148] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:49.386 [2024-11-19 09:42:36.849776] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:14:49.386 [2024-11-19 09:42:36.850537] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:14:49.386 [2024-11-19 09:42:36.850758] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:49.386 [2024-11-19 09:42:36.850760] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:14:49.386 [2024-11-19 09:42:36.855516] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:50.322 09:42:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:50.322 09:42:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:14:50.322 09:42:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:50.322 09:42:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:50.322 09:42:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:50.322 09:42:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:50.322 09:42:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:50.322 09:42:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.322 09:42:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:50.322 [2024-11-19 09:42:37.723416] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:50.322 09:42:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.322 09:42:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:50.322 09:42:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.322 09:42:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:50.322 Malloc0 00:14:50.322 09:42:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.322 09:42:37 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:50.322 09:42:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.322 09:42:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:50.322 09:42:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.322 09:42:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:50.322 09:42:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.322 09:42:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:50.322 09:42:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.322 09:42:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:14:50.322 09:42:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.322 09:42:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:50.322 [2024-11-19 09:42:37.771791] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:50.322 09:42:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.322 09:42:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:14:50.322 09:42:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:14:50.322 09:42:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:14:50.322 09:42:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:14:50.322 09:42:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:14:50.322 09:42:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:14:50.322 { 00:14:50.322 "params": { 00:14:50.322 "name": "Nvme$subsystem", 00:14:50.322 "trtype": "$TEST_TRANSPORT", 00:14:50.322 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:50.322 "adrfam": "ipv4", 00:14:50.322 "trsvcid": "$NVMF_PORT", 00:14:50.322 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:50.322 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:50.322 "hdgst": ${hdgst:-false}, 00:14:50.322 "ddgst": ${ddgst:-false} 00:14:50.322 }, 00:14:50.322 "method": "bdev_nvme_attach_controller" 00:14:50.322 } 00:14:50.322 EOF 00:14:50.322 )") 00:14:50.322 09:42:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:14:50.322 09:42:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 
00:14:50.322 09:42:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:14:50.322 09:42:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:14:50.322 "params": { 00:14:50.322 "name": "Nvme1", 00:14:50.322 "trtype": "tcp", 00:14:50.322 "traddr": "10.0.0.3", 00:14:50.322 "adrfam": "ipv4", 00:14:50.322 "trsvcid": "4420", 00:14:50.322 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:50.322 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:50.322 "hdgst": false, 00:14:50.322 "ddgst": false 00:14:50.322 }, 00:14:50.322 "method": "bdev_nvme_attach_controller" 00:14:50.322 }' 00:14:50.322 [2024-11-19 09:42:37.832858] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:14:50.322 [2024-11-19 09:42:37.832958] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid71099 ] 00:14:50.581 [2024-11-19 09:42:37.995064] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:50.581 [2024-11-19 09:42:38.077860] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:50.581 [2024-11-19 09:42:38.078001] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:50.581 [2024-11-19 09:42:38.078006] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:50.581 [2024-11-19 09:42:38.092943] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:50.839 I/O targets: 00:14:50.839 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:14:50.839 00:14:50.839 00:14:50.839 CUnit - A unit testing framework for C - Version 2.1-3 00:14:50.839 http://cunit.sourceforge.net/ 00:14:50.839 00:14:50.839 00:14:50.839 Suite: bdevio tests on: Nvme1n1 00:14:50.839 Test: blockdev write read block ...passed 00:14:50.839 Test: blockdev write zeroes read block ...passed 00:14:50.839 Test: blockdev write zeroes read no split ...passed 00:14:50.839 Test: blockdev write zeroes read split ...passed 00:14:50.839 Test: blockdev write zeroes read split partial ...passed 00:14:50.839 Test: blockdev reset ...[2024-11-19 09:42:38.339595] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:14:50.839 [2024-11-19 09:42:38.339715] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbe1310 (9): Bad file descriptor 00:14:50.839 passed 00:14:50.839 Test: blockdev write read 8 blocks ...[2024-11-19 09:42:38.356263] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:14:50.839 passed 00:14:50.839 Test: blockdev write read size > 128k ...passed 00:14:50.839 Test: blockdev write read invalid size ...passed 00:14:50.839 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:50.839 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:50.839 Test: blockdev write read max offset ...passed 00:14:50.839 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:50.839 Test: blockdev writev readv 8 blocks ...passed 00:14:50.839 Test: blockdev writev readv 30 x 1block ...passed 00:14:50.839 Test: blockdev writev readv block ...passed 00:14:50.839 Test: blockdev writev readv size > 128k ...passed 00:14:50.839 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:50.839 Test: blockdev comparev and writev ...[2024-11-19 09:42:38.365429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:50.839 [2024-11-19 09:42:38.365474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:50.839 [2024-11-19 09:42:38.365497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:50.839 [2024-11-19 09:42:38.365508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:14:50.839 [2024-11-19 09:42:38.365835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:50.839 [2024-11-19 09:42:38.365858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:14:50.839 [2024-11-19 09:42:38.365876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:50.839 [2024-11-19 09:42:38.365886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:14:50.839 [2024-11-19 09:42:38.366285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:50.839 [2024-11-19 09:42:38.366308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:14:50.839 [2024-11-19 09:42:38.366326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:50.839 [2024-11-19 09:42:38.366336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:14:50.839 [2024-11-19 09:42:38.366629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:50.839 [2024-11-19 09:42:38.366660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:14:50.839 [2024-11-19 09:42:38.366678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:50.839 [2024-11-19 09:42:38.366687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 
00:14:50.839 passed 00:14:50.839 Test: blockdev nvme passthru rw ...passed 00:14:50.839 Test: blockdev nvme passthru vendor specific ...[2024-11-19 09:42:38.367722] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:50.839 [2024-11-19 09:42:38.367750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:14:50.839 [2024-11-19 09:42:38.367872] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:50.839 [2024-11-19 09:42:38.367889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:14:50.839 [2024-11-19 09:42:38.368009] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:50.839 [2024-11-19 09:42:38.368030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:14:50.839 [2024-11-19 09:42:38.368147] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:50.839 [2024-11-19 09:42:38.368167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:14:50.839 passed 00:14:50.839 Test: blockdev nvme admin passthru ...passed 00:14:50.839 Test: blockdev copy ...passed 00:14:50.839 00:14:50.839 Run Summary: Type Total Ran Passed Failed Inactive 00:14:50.840 suites 1 1 n/a 0 0 00:14:50.840 tests 23 23 23 0 0 00:14:50.840 asserts 152 152 152 0 n/a 00:14:50.840 00:14:50.840 Elapsed time = 0.172 seconds 00:14:51.097 09:42:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:51.097 09:42:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.097 09:42:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:51.355 09:42:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.355 09:42:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:14:51.355 09:42:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:14:51.355 09:42:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:51.355 09:42:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:14:51.355 09:42:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:51.355 09:42:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:14:51.355 09:42:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:51.355 09:42:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:51.355 rmmod nvme_tcp 00:14:51.355 rmmod nvme_fabrics 00:14:51.355 rmmod nvme_keyring 00:14:51.355 09:42:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:51.355 09:42:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@128 -- # set -e 00:14:51.355 09:42:38 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:14:51.355 09:42:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 71059 ']' 00:14:51.355 09:42:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 71059 00:14:51.355 09:42:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 71059 ']' 00:14:51.355 09:42:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 71059 00:14:51.355 09:42:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:14:51.355 09:42:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:51.355 09:42:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71059 00:14:51.355 09:42:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:14:51.355 killing process with pid 71059 00:14:51.355 09:42:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:14:51.355 09:42:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71059' 00:14:51.355 09:42:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 71059 00:14:51.355 09:42:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 71059 00:14:51.724 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:51.724 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:51.724 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:51.724 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:14:51.724 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:14:51.724 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:51.724 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:14:51.724 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:51.724 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:51.724 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:51.724 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:51.724 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:51.998 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:14:51.998 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:51.998 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:51.998 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@239 -- # ip link set 
nvmf_tgt_br down 00:14:51.998 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:14:51.998 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:51.998 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:14:51.998 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:51.998 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:51.999 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:51.999 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:51.999 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:51.999 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:51.999 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:51.999 ************************************ 00:14:51.999 END TEST nvmf_bdevio_no_huge 00:14:51.999 ************************************ 00:14:51.999 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@300 -- # return 0 00:14:51.999 00:14:51.999 real 0m3.615s 00:14:51.999 user 0m11.084s 00:14:51.999 sys 0m1.390s 00:14:51.999 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:51.999 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:51.999 09:42:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:14:51.999 09:42:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:51.999 09:42:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:51.999 09:42:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:51.999 ************************************ 00:14:51.999 START TEST nvmf_tls 00:14:51.999 ************************************ 00:14:51.999 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:14:51.999 * Looking for test storage... 
00:14:52.258 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:52.258 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:52.258 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lcov --version 00:14:52.258 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:52.258 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:52.258 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:52.258 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:52.258 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:52.258 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:14:52.258 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:14:52.258 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:14:52.258 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:14:52.258 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:14:52.258 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:14:52.258 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:14:52.258 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:52.258 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:14:52.258 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:14:52.258 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:52.258 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:52.258 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:14:52.258 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:14:52.258 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:52.258 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:14:52.258 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:14:52.258 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:14:52.258 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:14:52.258 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:52.258 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:14:52.258 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:14:52.258 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:52.258 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:52.258 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:14:52.258 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:52.258 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:52.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:52.258 --rc genhtml_branch_coverage=1 00:14:52.258 --rc genhtml_function_coverage=1 00:14:52.258 --rc genhtml_legend=1 00:14:52.258 --rc geninfo_all_blocks=1 00:14:52.258 --rc geninfo_unexecuted_blocks=1 00:14:52.258 00:14:52.258 ' 00:14:52.258 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:52.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:52.258 --rc genhtml_branch_coverage=1 00:14:52.258 --rc genhtml_function_coverage=1 00:14:52.258 --rc genhtml_legend=1 00:14:52.258 --rc geninfo_all_blocks=1 00:14:52.258 --rc geninfo_unexecuted_blocks=1 00:14:52.258 00:14:52.258 ' 00:14:52.258 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:52.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:52.258 --rc genhtml_branch_coverage=1 00:14:52.258 --rc genhtml_function_coverage=1 00:14:52.258 --rc genhtml_legend=1 00:14:52.258 --rc geninfo_all_blocks=1 00:14:52.258 --rc geninfo_unexecuted_blocks=1 00:14:52.258 00:14:52.258 ' 00:14:52.258 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:52.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:52.258 --rc genhtml_branch_coverage=1 00:14:52.258 --rc genhtml_function_coverage=1 00:14:52.258 --rc genhtml_legend=1 00:14:52.258 --rc geninfo_all_blocks=1 00:14:52.258 --rc geninfo_unexecuted_blocks=1 00:14:52.258 00:14:52.258 ' 00:14:52.258 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:52.258 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:14:52.258 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:52.258 09:42:39 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:52.258 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:52.258 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:52.258 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:52.258 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:52.258 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:52.258 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:52.259 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:52.259 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:52.259 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c 00:14:52.259 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=9203ba0c-8506-4f0b-a886-a7f874c4694c 00:14:52.259 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:52.259 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:52.259 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:52.259 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:52.259 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:52.259 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:14:52.259 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:52.259 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:52.259 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:52.259 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:52.259 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:52.259 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:52.259 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:14:52.259 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:52.259 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:14:52.259 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:52.259 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:52.259 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:52.259 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:52.259 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:52.259 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:52.259 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:52.259 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:52.259 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:52.259 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:52.259 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:52.259 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:14:52.259 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:52.259 
09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:52.259 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:52.259 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:52.259 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:52.259 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:52.259 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:52.259 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:52.259 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:14:52.259 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:14:52.259 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:14:52.259 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:14:52.259 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:14:52.259 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@460 -- # nvmf_veth_init 00:14:52.259 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:52.259 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:52.259 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:52.259 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:52.259 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:52.259 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:52.259 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:52.259 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:52.259 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:52.259 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:52.259 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:52.259 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:52.259 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:52.259 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:52.259 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:52.259 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:52.259 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:52.259 Cannot find device "nvmf_init_br" 00:14:52.259 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@162 -- # true 00:14:52.259 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:52.259 Cannot find device "nvmf_init_br2" 00:14:52.259 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # true 00:14:52.259 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:14:52.259 Cannot find device "nvmf_tgt_br" 00:14:52.259 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # true 00:14:52.259 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:14:52.259 Cannot find device "nvmf_tgt_br2" 00:14:52.259 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # true 00:14:52.259 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:52.259 Cannot find device "nvmf_init_br" 00:14:52.259 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # true 00:14:52.259 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:52.259 Cannot find device "nvmf_init_br2" 00:14:52.259 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # true 00:14:52.259 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:52.259 Cannot find device "nvmf_tgt_br" 00:14:52.259 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # true 00:14:52.259 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:52.259 Cannot find device "nvmf_tgt_br2" 00:14:52.259 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # true 00:14:52.259 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:52.259 Cannot find device "nvmf_br" 00:14:52.259 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # true 00:14:52.259 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:14:52.259 Cannot find device "nvmf_init_if" 00:14:52.259 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # true 00:14:52.259 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:52.259 Cannot find device "nvmf_init_if2" 00:14:52.517 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # true 00:14:52.517 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:52.517 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:52.517 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # true 00:14:52.517 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:52.517 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:52.517 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # true 00:14:52.517 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:52.517 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:52.517 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@181 -- # ip link 
add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:52.517 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:52.517 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:52.517 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:52.517 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:52.517 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:52.517 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:52.517 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:52.518 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:52.518 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:52.518 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:52.518 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:52.518 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:52.518 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:52.518 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:52.518 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:52.518 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:52.518 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:52.518 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:52.518 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:52.518 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:52.518 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:52.518 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:52.518 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:52.776 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:52.776 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:52.776 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:52.776 09:42:40 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:52.776 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:52.776 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:52.776 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:14:52.776 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:52.776 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:14:52.776 00:14:52.776 --- 10.0.0.3 ping statistics --- 00:14:52.776 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:52.776 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:14:52.776 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:14:52.776 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:14:52.776 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.039 ms 00:14:52.776 00:14:52.776 --- 10.0.0.4 ping statistics --- 00:14:52.776 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:52.776 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:14:52.776 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:52.776 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:52.776 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:14:52.776 00:14:52.776 --- 10.0.0.1 ping statistics --- 00:14:52.776 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:52.776 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:14:52.776 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:52.776 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:52.776 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.038 ms 00:14:52.776 00:14:52.776 --- 10.0.0.2 ping statistics --- 00:14:52.776 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:52.776 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:14:52.776 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:52.776 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@461 -- # return 0 00:14:52.776 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:52.776 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:52.776 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:52.776 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:52.776 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:52.776 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:52.776 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:52.776 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:14:52.776 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:52.776 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:52.776 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:52.776 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=71332 00:14:52.776 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:14:52.776 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 71332 00:14:52.776 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71332 ']' 00:14:52.776 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:52.776 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:52.777 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:52.777 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:52.777 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:52.777 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:52.777 [2024-11-19 09:42:40.246457] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 
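For reference, the interface plumbing traced above reduces to the following sequence (a minimal sketch of what nvmf/common.sh sets up before the target, started just above, comes up inside the namespace; the namespace itself is created slightly earlier in the script, the second pair nvmf_init_if2/nvmf_tgt_if2 with 10.0.0.2/10.0.0.4 is built the same way, and names and addresses are simply the values used in this run, not anything SPDK requires):

# one veth pair per side; the *_if ends carry addresses, the *_br ends join a bridge
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                       # target end moves into the namespace

ip addr add 10.0.0.1/24 dev nvmf_init_if                             # initiator address on the host
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if   # target address in the namespace

# bridge the host-side peers so both ends share one L2 segment, then bring everything up
ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up; ip link set nvmf_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# open TCP/4420 on the initiator interface and let traffic hairpin across the bridge
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# reachability check, then launch the target inside the namespace (flags as in the trace)
ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc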
00:14:52.777 [2024-11-19 09:42:40.246568] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:53.035 [2024-11-19 09:42:40.425388] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:53.035 [2024-11-19 09:42:40.499843] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:53.035 [2024-11-19 09:42:40.499923] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:53.035 [2024-11-19 09:42:40.499942] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:53.035 [2024-11-19 09:42:40.499955] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:53.035 [2024-11-19 09:42:40.499966] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:53.035 [2024-11-19 09:42:40.500691] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:53.601 09:42:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:53.601 09:42:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:53.601 09:42:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:53.601 09:42:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:53.601 09:42:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:53.859 09:42:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:53.859 09:42:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:14:53.859 09:42:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:14:54.117 true 00:14:54.117 09:42:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:54.117 09:42:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:14:54.375 09:42:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:14:54.375 09:42:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:14:54.375 09:42:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:14:54.632 09:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:14:54.632 09:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:54.890 09:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:14:54.890 09:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:14:54.890 09:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:14:55.148 09:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i 
ssl 00:14:55.148 09:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:14:55.406 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:14:55.406 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:14:55.406 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:55.406 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:14:55.663 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:14:55.664 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:14:55.664 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:14:55.927 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:55.927 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:14:56.527 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:14:56.527 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:14:56.527 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:14:56.527 09:42:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:56.527 09:42:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:14:57.092 09:42:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:14:57.092 09:42:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:14:57.092 09:42:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:14:57.093 09:42:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:14:57.093 09:42:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:14:57.093 09:42:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:14:57.093 09:42:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:14:57.093 09:42:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:14:57.093 09:42:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:14:57.093 09:42:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:14:57.093 09:42:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:14:57.093 09:42:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:14:57.093 09:42:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:14:57.093 09:42:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:14:57.093 09:42:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:14:57.093 09:42:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:14:57.093 09:42:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:14:57.093 09:42:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:14:57.093 09:42:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:14:57.093 09:42:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.o9WAgBK3V6 00:14:57.093 09:42:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:14:57.093 09:42:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.3VVZKR698P 00:14:57.093 09:42:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:14:57.093 09:42:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:14:57.093 09:42:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.o9WAgBK3V6 00:14:57.093 09:42:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.3VVZKR698P 00:14:57.093 09:42:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:14:57.350 09:42:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:14:57.609 [2024-11-19 09:42:45.151279] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:57.609 09:42:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.o9WAgBK3V6 00:14:57.609 09:42:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.o9WAgBK3V6 00:14:57.609 09:42:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:58.175 [2024-11-19 09:42:45.507328] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:58.175 09:42:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:58.433 09:42:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:14:58.433 [2024-11-19 09:42:46.043433] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:58.433 [2024-11-19 09:42:46.043707] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:58.690 09:42:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:58.948 malloc0 00:14:58.948 09:42:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:59.205 09:42:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.o9WAgBK3V6 00:14:59.463 09:42:46 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:14:59.721 09:42:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.o9WAgBK3V6 00:15:11.954 Initializing NVMe Controllers 00:15:11.954 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:15:11.954 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:11.954 Initialization complete. Launching workers. 00:15:11.954 ======================================================== 00:15:11.954 Latency(us) 00:15:11.954 Device Information : IOPS MiB/s Average min max 00:15:11.954 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9452.19 36.92 6772.49 1149.03 8759.71 00:15:11.954 ======================================================== 00:15:11.954 Total : 9452.19 36.92 6772.49 1149.03 8759.71 00:15:11.954 00:15:11.954 09:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.o9WAgBK3V6 00:15:11.954 09:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:15:11.954 09:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:15:11.954 09:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:15:11.954 09:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.o9WAgBK3V6 00:15:11.954 09:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:11.954 09:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71578 00:15:11.954 09:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:11.954 09:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:11.954 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:11.954 09:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71578 /var/tmp/bdevperf.sock 00:15:11.954 09:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71578 ']' 00:15:11.954 09:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:11.954 09:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:11.954 09:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:15:11.954 09:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:11.954 09:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:11.954 [2024-11-19 09:42:57.423834] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:15:11.954 [2024-11-19 09:42:57.424291] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71578 ] 00:15:11.954 [2024-11-19 09:42:57.599606] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:11.954 [2024-11-19 09:42:57.671238] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:11.954 [2024-11-19 09:42:57.732247] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:11.954 09:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:11.954 09:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:11.954 09:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.o9WAgBK3V6 00:15:11.954 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:15:11.955 [2024-11-19 09:42:58.309788] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:11.955 TLSTESTn1 00:15:11.955 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:15:11.955 Running I/O for 10 seconds... 
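Stripped of the xtrace noise, the successful TLS path exercised above amounts to the rpc.py sequence below (a sketch assembled from the trace, with the repo path abbreviated; /tmp/tmp.o9WAgBK3V6 is the PSK file written earlier in this run, and the per-second IOPS samples that follow come from the bdevperf run it sets up). The earlier spdk_nvme_perf run uses the same key directly via --psk-path instead of going through the keyring.

# target: switch the socket layer to the ssl implementation, pin TLS 1.3, finish init
scripts/rpc.py sock_set_default_impl -i ssl
scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13
scripts/rpc.py framework_start_init

# target: TCP transport, a subsystem backed by a malloc bdev, and a listener with -k (TLS)
scripts/rpc.py nvmf_create_transport -t tcp -o
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k
scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

# target: register the PSK file in the keyring and tie it to the allowed host
scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.o9WAgBK3V6
scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0

# initiator (bdevperf): same key on its own RPC socket, then attach over TLS
scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.o9WAgBK3V6
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0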
00:15:12.907 3968.00 IOPS, 15.50 MiB/s [2024-11-19T09:43:01.904Z] 3979.50 IOPS, 15.54 MiB/s [2024-11-19T09:43:02.840Z] 4010.00 IOPS, 15.66 MiB/s [2024-11-19T09:43:03.776Z] 4021.25 IOPS, 15.71 MiB/s [2024-11-19T09:43:04.712Z] 4030.20 IOPS, 15.74 MiB/s [2024-11-19T09:43:05.647Z] 4035.67 IOPS, 15.76 MiB/s [2024-11-19T09:43:06.582Z] 4030.29 IOPS, 15.74 MiB/s [2024-11-19T09:43:07.957Z] 4032.38 IOPS, 15.75 MiB/s [2024-11-19T09:43:08.893Z] 4031.89 IOPS, 15.75 MiB/s [2024-11-19T09:43:08.893Z] 4030.60 IOPS, 15.74 MiB/s 00:15:21.270 Latency(us) 00:15:21.270 [2024-11-19T09:43:08.893Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:21.270 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:15:21.270 Verification LBA range: start 0x0 length 0x2000 00:15:21.270 TLSTESTn1 : 10.02 4036.29 15.77 0.00 0.00 31653.48 6225.92 24546.21 00:15:21.270 [2024-11-19T09:43:08.893Z] =================================================================================================================== 00:15:21.270 [2024-11-19T09:43:08.893Z] Total : 4036.29 15.77 0.00 0.00 31653.48 6225.92 24546.21 00:15:21.270 { 00:15:21.270 "results": [ 00:15:21.270 { 00:15:21.270 "job": "TLSTESTn1", 00:15:21.270 "core_mask": "0x4", 00:15:21.270 "workload": "verify", 00:15:21.270 "status": "finished", 00:15:21.270 "verify_range": { 00:15:21.270 "start": 0, 00:15:21.270 "length": 8192 00:15:21.270 }, 00:15:21.270 "queue_depth": 128, 00:15:21.270 "io_size": 4096, 00:15:21.270 "runtime": 10.01762, 00:15:21.270 "iops": 4036.288060437509, 00:15:21.270 "mibps": 15.76675023608402, 00:15:21.270 "io_failed": 0, 00:15:21.270 "io_timeout": 0, 00:15:21.270 "avg_latency_us": 31653.475871161536, 00:15:21.270 "min_latency_us": 6225.92, 00:15:21.270 "max_latency_us": 24546.21090909091 00:15:21.270 } 00:15:21.270 ], 00:15:21.270 "core_count": 1 00:15:21.270 } 00:15:21.270 09:43:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:21.270 09:43:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 71578 00:15:21.270 09:43:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71578 ']' 00:15:21.270 09:43:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71578 00:15:21.270 09:43:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:21.270 09:43:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:21.270 09:43:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71578 00:15:21.270 killing process with pid 71578 00:15:21.270 Received shutdown signal, test time was about 10.000000 seconds 00:15:21.270 00:15:21.270 Latency(us) 00:15:21.270 [2024-11-19T09:43:08.893Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:21.270 [2024-11-19T09:43:08.893Z] =================================================================================================================== 00:15:21.270 [2024-11-19T09:43:08.893Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:21.270 09:43:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:15:21.270 09:43:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:15:21.270 09:43:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with 
pid 71578' 00:15:21.270 09:43:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71578 00:15:21.271 09:43:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71578 00:15:21.271 09:43:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.3VVZKR698P 00:15:21.271 09:43:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:15:21.271 09:43:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.3VVZKR698P 00:15:21.271 09:43:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:15:21.271 09:43:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:21.271 09:43:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:15:21.271 09:43:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:21.271 09:43:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.3VVZKR698P 00:15:21.271 09:43:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:15:21.271 09:43:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:15:21.271 09:43:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:15:21.271 09:43:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.3VVZKR698P 00:15:21.271 09:43:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:21.271 09:43:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71705 00:15:21.271 09:43:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:21.271 09:43:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:21.271 09:43:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71705 /var/tmp/bdevperf.sock 00:15:21.271 09:43:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71705 ']' 00:15:21.271 09:43:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:21.271 09:43:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:21.271 09:43:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:21.271 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:21.271 09:43:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:21.271 09:43:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:21.271 [2024-11-19 09:43:08.854991] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 
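The case started above is the first negative check: the initiator registers the second key, /tmp/tmp.3VVZKR698P, which was never associated with any host on the target, so the TLS handshake cannot complete and the attach is expected to fail. In outline (a sketch; the "Transport endpoint is not connected" errors and the -5 Input/output error that follow are the intended outcome):

# register a PSK the target does not know, then try to attach; the NOT wrapper expects this to fail
scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.3VVZKR698P
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0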
00:15:21.271 [2024-11-19 09:43:08.855327] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71705 ] 00:15:21.529 [2024-11-19 09:43:09.006200] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:21.529 [2024-11-19 09:43:09.068227] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:21.529 [2024-11-19 09:43:09.122055] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:21.786 09:43:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:21.786 09:43:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:21.786 09:43:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.3VVZKR698P 00:15:22.044 09:43:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:15:22.303 [2024-11-19 09:43:09.698576] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:22.303 [2024-11-19 09:43:09.707787] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:15:22.303 [2024-11-19 09:43:09.708431] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170dfb0 (107): Transport endpoint is not connected 00:15:22.303 [2024-11-19 09:43:09.709422] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170dfb0 (9): Bad file descriptor 00:15:22.303 [2024-11-19 09:43:09.710418] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:15:22.303 [2024-11-19 09:43:09.710444] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:15:22.303 [2024-11-19 09:43:09.710454] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:15:22.303 [2024-11-19 09:43:09.710470] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:15:22.303 request: 00:15:22.303 { 00:15:22.303 "name": "TLSTEST", 00:15:22.303 "trtype": "tcp", 00:15:22.303 "traddr": "10.0.0.3", 00:15:22.303 "adrfam": "ipv4", 00:15:22.303 "trsvcid": "4420", 00:15:22.303 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:22.303 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:22.303 "prchk_reftag": false, 00:15:22.303 "prchk_guard": false, 00:15:22.303 "hdgst": false, 00:15:22.303 "ddgst": false, 00:15:22.303 "psk": "key0", 00:15:22.303 "allow_unrecognized_csi": false, 00:15:22.303 "method": "bdev_nvme_attach_controller", 00:15:22.303 "req_id": 1 00:15:22.303 } 00:15:22.303 Got JSON-RPC error response 00:15:22.303 response: 00:15:22.303 { 00:15:22.303 "code": -5, 00:15:22.303 "message": "Input/output error" 00:15:22.303 } 00:15:22.303 09:43:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 71705 00:15:22.303 09:43:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71705 ']' 00:15:22.303 09:43:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71705 00:15:22.303 09:43:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:22.303 09:43:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:22.303 09:43:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71705 00:15:22.303 killing process with pid 71705 00:15:22.303 Received shutdown signal, test time was about 10.000000 seconds 00:15:22.303 00:15:22.303 Latency(us) 00:15:22.303 [2024-11-19T09:43:09.926Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:22.303 [2024-11-19T09:43:09.926Z] =================================================================================================================== 00:15:22.303 [2024-11-19T09:43:09.926Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:22.303 09:43:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:15:22.303 09:43:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:15:22.303 09:43:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71705' 00:15:22.303 09:43:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71705 00:15:22.303 09:43:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71705 00:15:22.562 09:43:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:15:22.562 09:43:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:15:22.562 09:43:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:22.562 09:43:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:22.562 09:43:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:22.562 09:43:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.o9WAgBK3V6 00:15:22.562 09:43:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:15:22.562 09:43:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.o9WAgBK3V6 
00:15:22.562 09:43:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:15:22.562 09:43:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:22.562 09:43:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:15:22.562 09:43:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:22.562 09:43:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.o9WAgBK3V6 00:15:22.562 09:43:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:15:22.562 09:43:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:15:22.562 09:43:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:15:22.562 09:43:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.o9WAgBK3V6 00:15:22.562 09:43:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:22.562 09:43:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71726 00:15:22.562 09:43:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:22.562 09:43:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:22.562 09:43:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71726 /var/tmp/bdevperf.sock 00:15:22.562 09:43:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71726 ']' 00:15:22.562 09:43:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:22.562 09:43:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:22.562 09:43:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:22.562 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:22.562 09:43:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:22.562 09:43:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:22.562 [2024-11-19 09:43:10.006862] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 
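The second negative case uses the correct key file but host NQN nqn.2016-06.io.spdk:host2, which was never added to cnode1, so the target has no PSK for the identity "NVMe0R01 ...host2 ...cnode1" and drops the connection. It would only pass if host2 had been registered on the target side, along the lines of the add_host call used for host1 (hypothetical here, including the key name and file path):

# not part of this run: registering host2 with its own PSK would make the attach below succeed
scripts/rpc.py keyring_file_add_key key_host2 /tmp/host2.psk          # hypothetical key file
scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 --psk key_host2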
00:15:22.562 [2024-11-19 09:43:10.006958] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71726 ] 00:15:22.562 [2024-11-19 09:43:10.149176] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:22.821 [2024-11-19 09:43:10.209836] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:22.821 [2024-11-19 09:43:10.264377] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:22.821 09:43:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:22.821 09:43:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:22.821 09:43:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.o9WAgBK3V6 00:15:23.079 09:43:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:15:23.338 [2024-11-19 09:43:10.811010] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:23.338 [2024-11-19 09:43:10.817756] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:15:23.338 [2024-11-19 09:43:10.817814] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:15:23.338 [2024-11-19 09:43:10.817863] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:15:23.338 [2024-11-19 09:43:10.818806] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aebfb0 (107): Transport endpoint is not connected 00:15:23.338 [2024-11-19 09:43:10.819790] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aebfb0 (9): Bad file descriptor 00:15:23.338 [2024-11-19 09:43:10.820786] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:15:23.338 [2024-11-19 09:43:10.820833] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:15:23.338 [2024-11-19 09:43:10.820851] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:15:23.338 [2024-11-19 09:43:10.820877] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:15:23.338 request: 00:15:23.338 { 00:15:23.338 "name": "TLSTEST", 00:15:23.338 "trtype": "tcp", 00:15:23.338 "traddr": "10.0.0.3", 00:15:23.338 "adrfam": "ipv4", 00:15:23.338 "trsvcid": "4420", 00:15:23.338 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:23.338 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:15:23.338 "prchk_reftag": false, 00:15:23.338 "prchk_guard": false, 00:15:23.338 "hdgst": false, 00:15:23.338 "ddgst": false, 00:15:23.338 "psk": "key0", 00:15:23.338 "allow_unrecognized_csi": false, 00:15:23.338 "method": "bdev_nvme_attach_controller", 00:15:23.338 "req_id": 1 00:15:23.338 } 00:15:23.338 Got JSON-RPC error response 00:15:23.338 response: 00:15:23.338 { 00:15:23.338 "code": -5, 00:15:23.338 "message": "Input/output error" 00:15:23.338 } 00:15:23.338 09:43:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 71726 00:15:23.338 09:43:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71726 ']' 00:15:23.338 09:43:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71726 00:15:23.338 09:43:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:23.338 09:43:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:23.338 09:43:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71726 00:15:23.338 killing process with pid 71726 00:15:23.338 Received shutdown signal, test time was about 10.000000 seconds 00:15:23.338 00:15:23.338 Latency(us) 00:15:23.338 [2024-11-19T09:43:10.961Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:23.338 [2024-11-19T09:43:10.961Z] =================================================================================================================== 00:15:23.338 [2024-11-19T09:43:10.961Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:23.338 09:43:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:15:23.338 09:43:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:15:23.338 09:43:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71726' 00:15:23.338 09:43:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71726 00:15:23.338 09:43:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71726 00:15:23.597 09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:15:23.597 09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:15:23.597 09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:23.597 09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:23.597 09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:23.597 09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.o9WAgBK3V6 00:15:23.597 09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:15:23.597 09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.o9WAgBK3V6 
00:15:23.597 09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:15:23.597 09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:23.597 09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:15:23.597 09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:23.597 09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.o9WAgBK3V6 00:15:23.597 09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:15:23.597 09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:15:23.597 09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:15:23.597 09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.o9WAgBK3V6 00:15:23.597 09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:23.597 09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71747 00:15:23.597 09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:23.597 09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:23.597 09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71747 /var/tmp/bdevperf.sock 00:15:23.597 09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71747 ']' 00:15:23.597 09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:23.597 09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:23.598 09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:23.598 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:23.598 09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:23.598 09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:23.598 [2024-11-19 09:43:11.118282] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 
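This case points the initiator at nqn.2016-06.io.spdk:cnode2, a subsystem the target never created, and again expects failure. All three negative cases are driven by the NOT wrapper, which essentially runs the command and inverts its exit status; a simplified sketch of the idea (the real helper in autotest_common.sh also validates its argument and special-cases signal exits, as the type -t and es > 128 traces show):

# simplified: run the command and treat success as failure, failure as success
NOT() { if "$@"; then return 1; else return 0; fi; }
NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.o9WAgBK3V6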
00:15:23.598 [2024-11-19 09:43:11.118371] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71747 ] 00:15:23.856 [2024-11-19 09:43:11.257802] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:23.856 [2024-11-19 09:43:11.310160] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:23.856 [2024-11-19 09:43:11.363688] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:23.856 09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:23.856 09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:23.856 09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.o9WAgBK3V6 00:15:24.115 09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:15:24.683 [2024-11-19 09:43:11.997330] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:24.683 [2024-11-19 09:43:12.007980] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:15:24.683 [2024-11-19 09:43:12.008036] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:15:24.683 [2024-11-19 09:43:12.008085] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:15:24.683 [2024-11-19 09:43:12.008295] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1253fb0 (107): Transport endpoint is not connected 00:15:24.683 [2024-11-19 09:43:12.009252] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1253fb0 (9): Bad file descriptor 00:15:24.683 [2024-11-19 09:43:12.010248] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:15:24.683 [2024-11-19 09:43:12.010319] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:15:24.683 [2024-11-19 09:43:12.010337] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:15:24.683 [2024-11-19 09:43:12.010364] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 
00:15:24.683 request: 00:15:24.683 { 00:15:24.683 "name": "TLSTEST", 00:15:24.683 "trtype": "tcp", 00:15:24.683 "traddr": "10.0.0.3", 00:15:24.683 "adrfam": "ipv4", 00:15:24.683 "trsvcid": "4420", 00:15:24.683 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:15:24.683 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:24.683 "prchk_reftag": false, 00:15:24.683 "prchk_guard": false, 00:15:24.683 "hdgst": false, 00:15:24.683 "ddgst": false, 00:15:24.683 "psk": "key0", 00:15:24.683 "allow_unrecognized_csi": false, 00:15:24.683 "method": "bdev_nvme_attach_controller", 00:15:24.683 "req_id": 1 00:15:24.683 } 00:15:24.683 Got JSON-RPC error response 00:15:24.683 response: 00:15:24.683 { 00:15:24.683 "code": -5, 00:15:24.683 "message": "Input/output error" 00:15:24.683 } 00:15:24.683 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 71747 00:15:24.683 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71747 ']' 00:15:24.683 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71747 00:15:24.683 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:24.683 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:24.683 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71747 00:15:24.683 killing process with pid 71747 00:15:24.683 Received shutdown signal, test time was about 10.000000 seconds 00:15:24.683 00:15:24.683 Latency(us) 00:15:24.683 [2024-11-19T09:43:12.306Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:24.683 [2024-11-19T09:43:12.306Z] =================================================================================================================== 00:15:24.683 [2024-11-19T09:43:12.306Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:24.683 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:15:24.683 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:15:24.683 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71747' 00:15:24.684 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71747 00:15:24.684 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71747 00:15:24.684 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:15:24.684 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:15:24.684 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:24.684 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:24.684 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:24.684 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:15:24.684 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:15:24.684 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:15:24.684 09:43:12 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:15:24.684 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:24.684 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:15:24.684 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:24.684 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:15:24.684 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:15:24.684 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:15:24.684 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:15:24.684 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:15:24.684 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:24.684 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71774 00:15:24.684 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:24.684 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:24.684 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71774 /var/tmp/bdevperf.sock 00:15:24.684 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71774 ']' 00:15:24.684 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:24.684 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:24.684 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:24.684 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:24.684 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:24.684 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:24.943 [2024-11-19 09:43:12.314186] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 
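The last case passes an empty string instead of a key path. The keyring rejects it up front ("Non-absolute paths are not allowed", JSON error -1), and the attach then fails with -126 "Required key not available" because key0 was never actually loaded; in outline (a sketch of the two rpc.py calls traced below):

scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 ''     # rejected: path must be absolute
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0   # fails: key0 holds no PSK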
00:15:24.943 [2024-11-19 09:43:12.314337] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71774 ] 00:15:24.943 [2024-11-19 09:43:12.455744] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:24.943 [2024-11-19 09:43:12.515384] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:25.202 [2024-11-19 09:43:12.572532] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:25.202 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:25.202 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:25.202 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:15:25.461 [2024-11-19 09:43:12.903399] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:15:25.461 [2024-11-19 09:43:12.903456] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:15:25.461 request: 00:15:25.461 { 00:15:25.461 "name": "key0", 00:15:25.461 "path": "", 00:15:25.461 "method": "keyring_file_add_key", 00:15:25.461 "req_id": 1 00:15:25.461 } 00:15:25.462 Got JSON-RPC error response 00:15:25.462 response: 00:15:25.462 { 00:15:25.462 "code": -1, 00:15:25.462 "message": "Operation not permitted" 00:15:25.462 } 00:15:25.462 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:15:25.721 [2024-11-19 09:43:13.155617] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:25.721 [2024-11-19 09:43:13.155718] bdev_nvme.c:6622:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:15:25.721 request: 00:15:25.721 { 00:15:25.721 "name": "TLSTEST", 00:15:25.721 "trtype": "tcp", 00:15:25.721 "traddr": "10.0.0.3", 00:15:25.721 "adrfam": "ipv4", 00:15:25.721 "trsvcid": "4420", 00:15:25.721 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:25.721 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:25.721 "prchk_reftag": false, 00:15:25.721 "prchk_guard": false, 00:15:25.721 "hdgst": false, 00:15:25.721 "ddgst": false, 00:15:25.721 "psk": "key0", 00:15:25.721 "allow_unrecognized_csi": false, 00:15:25.721 "method": "bdev_nvme_attach_controller", 00:15:25.721 "req_id": 1 00:15:25.721 } 00:15:25.721 Got JSON-RPC error response 00:15:25.721 response: 00:15:25.721 { 00:15:25.721 "code": -126, 00:15:25.721 "message": "Required key not available" 00:15:25.721 } 00:15:25.721 09:43:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 71774 00:15:25.721 09:43:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71774 ']' 00:15:25.721 09:43:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71774 00:15:25.721 09:43:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:25.721 09:43:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:25.721 09:43:13 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71774 00:15:25.721 killing process with pid 71774 00:15:25.721 Received shutdown signal, test time was about 10.000000 seconds 00:15:25.721 00:15:25.721 Latency(us) 00:15:25.721 [2024-11-19T09:43:13.344Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:25.721 [2024-11-19T09:43:13.344Z] =================================================================================================================== 00:15:25.721 [2024-11-19T09:43:13.344Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:25.721 09:43:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:15:25.721 09:43:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:15:25.721 09:43:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71774' 00:15:25.721 09:43:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71774 00:15:25.721 09:43:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71774 00:15:25.980 09:43:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:15:25.980 09:43:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:15:25.980 09:43:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:25.980 09:43:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:25.980 09:43:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:25.980 09:43:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 71332 00:15:25.980 09:43:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71332 ']' 00:15:25.980 09:43:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71332 00:15:25.980 09:43:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:25.980 09:43:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:25.980 09:43:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71332 00:15:25.980 killing process with pid 71332 00:15:25.980 09:43:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:25.980 09:43:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:25.980 09:43:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71332' 00:15:25.980 09:43:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71332 00:15:25.980 09:43:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71332 00:15:26.239 09:43:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:15:26.239 09:43:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:15:26.239 09:43:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:15:26.239 09:43:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 
-- # prefix=NVMeTLSkey-1 00:15:26.239 09:43:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:15:26.239 09:43:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:15:26.239 09:43:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:15:26.239 09:43:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:15:26.239 09:43:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:15:26.239 09:43:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.P63RF6VLQv 00:15:26.239 09:43:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:15:26.239 09:43:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.P63RF6VLQv 00:15:26.239 09:43:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:15:26.239 09:43:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:26.239 09:43:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:26.239 09:43:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:26.239 09:43:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=71805 00:15:26.239 09:43:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:26.239 09:43:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 71805 00:15:26.239 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:26.239 09:43:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71805 ']' 00:15:26.239 09:43:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:26.239 09:43:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:26.239 09:43:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:26.239 09:43:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:26.239 09:43:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:26.239 [2024-11-19 09:43:13.793709] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:15:26.239 [2024-11-19 09:43:13.794042] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:26.497 [2024-11-19 09:43:13.936954] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:26.497 [2024-11-19 09:43:13.995783] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:26.497 [2024-11-19 09:43:13.995839] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:15:26.497 [2024-11-19 09:43:13.995851] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:26.497 [2024-11-19 09:43:13.995859] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:26.497 [2024-11-19 09:43:13.995867] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:26.497 [2024-11-19 09:43:13.996286] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:26.497 [2024-11-19 09:43:14.052402] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:27.430 09:43:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:27.430 09:43:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:27.430 09:43:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:27.430 09:43:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:27.430 09:43:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:27.430 09:43:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:27.430 09:43:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.P63RF6VLQv 00:15:27.430 09:43:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.P63RF6VLQv 00:15:27.430 09:43:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:15:27.688 [2024-11-19 09:43:15.088758] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:27.688 09:43:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:15:27.946 09:43:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:15:28.205 [2024-11-19 09:43:15.648940] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:28.205 [2024-11-19 09:43:15.649206] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:28.205 09:43:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:15:28.463 malloc0 00:15:28.463 09:43:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:28.720 09:43:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.P63RF6VLQv 00:15:28.978 09:43:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:15:29.235 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
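The key written to /tmp/tmp.P63RF6VLQv above is an interchange-format PSK as emitted by format_interchange_psk: the NVMeTLSkey-1 prefix, a two-digit hash identifier (02 here), and the configured key material base64-encoded with a CRC-32 appended; the file is then kept owner-only (chmod 0600), which the keyring requires. The setup_nvmf_tgt sequence traced above (target/tls.sh lines 50-59) reduces to the following RPCs against the default /var/tmp/spdk.sock (a sketch, commands copied from the trace; relative paths assume the SPDK repo root):

    scripts/rpc.py nvmf_create_transport -t tcp -o
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k   # -k enables TLS on the listener
    scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.P63RF6VLQv
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0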
00:15:29.235 09:43:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.P63RF6VLQv 00:15:29.235 09:43:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:15:29.235 09:43:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:15:29.235 09:43:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:15:29.235 09:43:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.P63RF6VLQv 00:15:29.235 09:43:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:29.235 09:43:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71866 00:15:29.235 09:43:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:29.235 09:43:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71866 /var/tmp/bdevperf.sock 00:15:29.235 09:43:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71866 ']' 00:15:29.235 09:43:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:29.235 09:43:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:29.235 09:43:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:29.235 09:43:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:29.235 09:43:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:29.235 09:43:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:29.235 [2024-11-19 09:43:16.851619] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 
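The passing case that follows mirrors this on the initiator side: bdevperf is started idle, the same key file is registered in its own keyring over the private /var/tmp/bdevperf.sock, and the controller is attached by key name before the I/O run is kicked off (a sketch, commands copied from the trace; relative paths assume the SPDK repo root):

    build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &   # -z: start idle and wait to be driven over RPC
    scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.P63RF6VLQv
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
    examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests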
00:15:29.236 [2024-11-19 09:43:16.852039] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71866 ] 00:15:29.493 [2024-11-19 09:43:17.002792] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:29.493 [2024-11-19 09:43:17.073417] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:29.751 [2024-11-19 09:43:17.133230] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:30.316 09:43:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:30.316 09:43:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:30.316 09:43:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.P63RF6VLQv 00:15:30.575 09:43:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:15:30.833 [2024-11-19 09:43:18.385622] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:31.091 TLSTESTn1 00:15:31.091 09:43:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:15:31.091 Running I/O for 10 seconds... 00:15:33.403 3968.00 IOPS, 15.50 MiB/s [2024-11-19T09:43:21.960Z] 4006.00 IOPS, 15.65 MiB/s [2024-11-19T09:43:22.962Z] 4036.00 IOPS, 15.77 MiB/s [2024-11-19T09:43:23.896Z] 4054.50 IOPS, 15.84 MiB/s [2024-11-19T09:43:24.829Z] 4064.60 IOPS, 15.88 MiB/s [2024-11-19T09:43:25.763Z] 4067.00 IOPS, 15.89 MiB/s [2024-11-19T09:43:26.699Z] 4072.43 IOPS, 15.91 MiB/s [2024-11-19T09:43:27.642Z] 4068.00 IOPS, 15.89 MiB/s [2024-11-19T09:43:29.025Z] 4070.67 IOPS, 15.90 MiB/s [2024-11-19T09:43:29.025Z] 4073.70 IOPS, 15.91 MiB/s 00:15:41.402 Latency(us) 00:15:41.402 [2024-11-19T09:43:29.025Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:41.402 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:15:41.402 Verification LBA range: start 0x0 length 0x2000 00:15:41.402 TLSTESTn1 : 10.02 4079.51 15.94 0.00 0.00 31318.65 6613.18 24069.59 00:15:41.402 [2024-11-19T09:43:29.025Z] =================================================================================================================== 00:15:41.402 [2024-11-19T09:43:29.025Z] Total : 4079.51 15.94 0.00 0.00 31318.65 6613.18 24069.59 00:15:41.402 { 00:15:41.402 "results": [ 00:15:41.402 { 00:15:41.402 "job": "TLSTESTn1", 00:15:41.402 "core_mask": "0x4", 00:15:41.402 "workload": "verify", 00:15:41.402 "status": "finished", 00:15:41.402 "verify_range": { 00:15:41.402 "start": 0, 00:15:41.402 "length": 8192 00:15:41.402 }, 00:15:41.402 "queue_depth": 128, 00:15:41.402 "io_size": 4096, 00:15:41.402 "runtime": 10.016893, 00:15:41.402 "iops": 4079.5084863140696, 00:15:41.402 "mibps": 15.935580024664334, 00:15:41.402 "io_failed": 0, 00:15:41.402 "io_timeout": 0, 00:15:41.402 "avg_latency_us": 31318.65327543248, 00:15:41.402 "min_latency_us": 6613.178181818182, 00:15:41.402 
"max_latency_us": 24069.585454545453 00:15:41.402 } 00:15:41.402 ], 00:15:41.402 "core_count": 1 00:15:41.402 } 00:15:41.402 09:43:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:41.402 09:43:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 71866 00:15:41.402 09:43:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71866 ']' 00:15:41.402 09:43:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71866 00:15:41.402 09:43:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:41.402 09:43:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:41.402 09:43:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71866 00:15:41.402 killing process with pid 71866 00:15:41.402 Received shutdown signal, test time was about 10.000000 seconds 00:15:41.402 00:15:41.402 Latency(us) 00:15:41.402 [2024-11-19T09:43:29.025Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:41.402 [2024-11-19T09:43:29.025Z] =================================================================================================================== 00:15:41.402 [2024-11-19T09:43:29.025Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:41.402 09:43:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:15:41.402 09:43:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:15:41.402 09:43:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71866' 00:15:41.402 09:43:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71866 00:15:41.402 09:43:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71866 00:15:41.402 09:43:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.P63RF6VLQv 00:15:41.402 09:43:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.P63RF6VLQv 00:15:41.402 09:43:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:15:41.402 09:43:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.P63RF6VLQv 00:15:41.402 09:43:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:15:41.402 09:43:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:41.402 09:43:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:15:41.402 09:43:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:41.402 09:43:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.P63RF6VLQv 00:15:41.402 09:43:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:15:41.402 09:43:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:15:41.402 09:43:28 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:15:41.402 09:43:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.P63RF6VLQv 00:15:41.402 09:43:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:41.402 09:43:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=72006 00:15:41.402 09:43:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:41.402 09:43:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:41.402 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:41.402 09:43:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 72006 /var/tmp/bdevperf.sock 00:15:41.402 09:43:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72006 ']' 00:15:41.402 09:43:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:41.402 09:43:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:41.402 09:43:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:41.402 09:43:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:41.402 09:43:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:41.402 [2024-11-19 09:43:28.959665] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 
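The chmod 0666 above is the point of the next two cases: the file-based keyring validates the key file's mode before loading it and refuses anything accessible to group or others, so keyring_file_add_key fails both on the bdevperf side here and on the target side later, and the dependent bdev_nvme_attach_controller / nvmf_subsystem_add_host calls fail with it (a sketch of the behaviour shown in the trace below):

    chmod 0666 /tmp/tmp.P63RF6VLQv   # keyring_file_add_key -> "Invalid permissions for key file ... 0100666", rc -1
    chmod 0600 /tmp/tmp.P63RF6VLQv   # owner-only again -> accepted, as in the passing cases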
00:15:41.402 [2024-11-19 09:43:28.959767] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72006 ] 00:15:41.661 [2024-11-19 09:43:29.105045] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:41.661 [2024-11-19 09:43:29.161490] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:41.661 [2024-11-19 09:43:29.215167] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:41.661 09:43:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:41.661 09:43:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:41.661 09:43:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.P63RF6VLQv 00:15:42.225 [2024-11-19 09:43:29.615082] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.P63RF6VLQv': 0100666 00:15:42.225 [2024-11-19 09:43:29.615136] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:15:42.225 request: 00:15:42.225 { 00:15:42.225 "name": "key0", 00:15:42.225 "path": "/tmp/tmp.P63RF6VLQv", 00:15:42.225 "method": "keyring_file_add_key", 00:15:42.225 "req_id": 1 00:15:42.225 } 00:15:42.225 Got JSON-RPC error response 00:15:42.225 response: 00:15:42.225 { 00:15:42.225 "code": -1, 00:15:42.225 "message": "Operation not permitted" 00:15:42.225 } 00:15:42.225 09:43:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:15:42.482 [2024-11-19 09:43:29.879278] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:42.482 [2024-11-19 09:43:29.879344] bdev_nvme.c:6622:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:15:42.482 request: 00:15:42.482 { 00:15:42.482 "name": "TLSTEST", 00:15:42.482 "trtype": "tcp", 00:15:42.482 "traddr": "10.0.0.3", 00:15:42.482 "adrfam": "ipv4", 00:15:42.482 "trsvcid": "4420", 00:15:42.482 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:42.482 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:42.482 "prchk_reftag": false, 00:15:42.482 "prchk_guard": false, 00:15:42.482 "hdgst": false, 00:15:42.482 "ddgst": false, 00:15:42.482 "psk": "key0", 00:15:42.482 "allow_unrecognized_csi": false, 00:15:42.482 "method": "bdev_nvme_attach_controller", 00:15:42.482 "req_id": 1 00:15:42.482 } 00:15:42.482 Got JSON-RPC error response 00:15:42.482 response: 00:15:42.482 { 00:15:42.482 "code": -126, 00:15:42.482 "message": "Required key not available" 00:15:42.482 } 00:15:42.482 09:43:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 72006 00:15:42.482 09:43:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72006 ']' 00:15:42.482 09:43:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72006 00:15:42.482 09:43:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:42.482 09:43:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:42.482 09:43:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72006 00:15:42.482 killing process with pid 72006 00:15:42.482 Received shutdown signal, test time was about 10.000000 seconds 00:15:42.482 00:15:42.482 Latency(us) 00:15:42.482 [2024-11-19T09:43:30.105Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:42.482 [2024-11-19T09:43:30.105Z] =================================================================================================================== 00:15:42.482 [2024-11-19T09:43:30.105Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:42.482 09:43:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:15:42.482 09:43:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:15:42.482 09:43:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72006' 00:15:42.482 09:43:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72006 00:15:42.482 09:43:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72006 00:15:42.741 09:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:15:42.741 09:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:15:42.741 09:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:42.741 09:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:42.741 09:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:42.741 09:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 71805 00:15:42.741 09:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71805 ']' 00:15:42.741 09:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71805 00:15:42.741 09:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:42.741 09:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:42.741 09:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71805 00:15:42.741 killing process with pid 71805 00:15:42.741 09:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:42.741 09:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:42.741 09:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71805' 00:15:42.741 09:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71805 00:15:42.741 09:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71805 00:15:42.741 09:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:15:42.741 09:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:42.741 09:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:42.741 09:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set 
+x 00:15:43.000 09:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=72033 00:15:43.000 09:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:43.000 09:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 72033 00:15:43.000 09:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72033 ']' 00:15:43.000 09:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:43.000 09:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:43.000 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:43.000 09:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:43.000 09:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:43.000 09:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:43.000 [2024-11-19 09:43:30.428259] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:15:43.000 [2024-11-19 09:43:30.428362] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:43.000 [2024-11-19 09:43:30.577715] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:43.258 [2024-11-19 09:43:30.637180] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:43.258 [2024-11-19 09:43:30.637249] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:43.258 [2024-11-19 09:43:30.637261] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:43.258 [2024-11-19 09:43:30.637269] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:43.258 [2024-11-19 09:43:30.637277] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:43.258 [2024-11-19 09:43:30.637692] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:43.258 [2024-11-19 09:43:30.693314] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:43.824 09:43:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:43.824 09:43:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:43.824 09:43:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:43.824 09:43:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:43.824 09:43:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:44.082 09:43:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:44.082 09:43:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.P63RF6VLQv 00:15:44.083 09:43:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:15:44.083 09:43:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.P63RF6VLQv 00:15:44.083 09:43:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:15:44.083 09:43:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:44.083 09:43:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:15:44.083 09:43:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:44.083 09:43:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.P63RF6VLQv 00:15:44.083 09:43:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.P63RF6VLQv 00:15:44.083 09:43:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:15:44.341 [2024-11-19 09:43:31.744704] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:44.341 09:43:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:15:44.599 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:15:44.858 [2024-11-19 09:43:32.332837] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:44.858 [2024-11-19 09:43:32.333122] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:44.858 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:15:45.115 malloc0 00:15:45.115 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:45.373 09:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.P63RF6VLQv 00:15:45.631 
[2024-11-19 09:43:33.108555] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.P63RF6VLQv': 0100666 00:15:45.631 [2024-11-19 09:43:33.108634] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:15:45.631 request: 00:15:45.631 { 00:15:45.631 "name": "key0", 00:15:45.631 "path": "/tmp/tmp.P63RF6VLQv", 00:15:45.631 "method": "keyring_file_add_key", 00:15:45.631 "req_id": 1 00:15:45.631 } 00:15:45.631 Got JSON-RPC error response 00:15:45.631 response: 00:15:45.631 { 00:15:45.631 "code": -1, 00:15:45.631 "message": "Operation not permitted" 00:15:45.631 } 00:15:45.632 09:43:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:15:45.890 [2024-11-19 09:43:33.348633] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:15:45.890 [2024-11-19 09:43:33.348774] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:15:45.890 request: 00:15:45.890 { 00:15:45.890 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:45.890 "host": "nqn.2016-06.io.spdk:host1", 00:15:45.890 "psk": "key0", 00:15:45.890 "method": "nvmf_subsystem_add_host", 00:15:45.890 "req_id": 1 00:15:45.890 } 00:15:45.890 Got JSON-RPC error response 00:15:45.890 response: 00:15:45.890 { 00:15:45.890 "code": -32603, 00:15:45.890 "message": "Internal error" 00:15:45.890 } 00:15:45.890 09:43:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:15:45.890 09:43:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:45.890 09:43:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:45.890 09:43:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:45.890 09:43:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 72033 00:15:45.890 09:43:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72033 ']' 00:15:45.890 09:43:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72033 00:15:45.890 09:43:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:45.890 09:43:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:45.890 09:43:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72033 00:15:45.890 09:43:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:45.890 09:43:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:45.890 killing process with pid 72033 00:15:45.890 09:43:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72033' 00:15:45.890 09:43:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72033 00:15:45.890 09:43:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72033 00:15:46.218 09:43:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.P63RF6VLQv 00:15:46.218 09:43:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:15:46.218 09:43:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:46.218 09:43:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:46.218 09:43:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:46.218 09:43:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=72108 00:15:46.218 09:43:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 72108 00:15:46.218 09:43:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:46.218 09:43:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72108 ']' 00:15:46.218 09:43:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:46.218 09:43:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:46.218 09:43:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:46.218 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:46.218 09:43:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:46.218 09:43:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:46.218 [2024-11-19 09:43:33.686247] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:15:46.218 [2024-11-19 09:43:33.686345] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:46.218 [2024-11-19 09:43:33.830407] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:46.476 [2024-11-19 09:43:33.891345] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:46.476 [2024-11-19 09:43:33.891420] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:46.476 [2024-11-19 09:43:33.891432] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:46.476 [2024-11-19 09:43:33.891440] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:46.476 [2024-11-19 09:43:33.891448] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:46.476 [2024-11-19 09:43:33.891904] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:46.476 [2024-11-19 09:43:33.951167] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:46.476 09:43:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:46.476 09:43:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:46.476 09:43:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:46.476 09:43:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:46.476 09:43:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:46.476 09:43:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:46.476 09:43:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.P63RF6VLQv 00:15:46.476 09:43:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.P63RF6VLQv 00:15:46.476 09:43:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:15:46.735 [2024-11-19 09:43:34.356856] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:46.993 09:43:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:15:47.251 09:43:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:15:47.509 [2024-11-19 09:43:34.916918] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:47.509 [2024-11-19 09:43:34.917207] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:47.509 09:43:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:15:47.767 malloc0 00:15:47.767 09:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:48.025 09:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.P63RF6VLQv 00:15:48.283 09:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:15:48.541 09:43:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=72157 00:15:48.541 09:43:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:48.541 09:43:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:48.541 09:43:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 72157 /var/tmp/bdevperf.sock 00:15:48.541 09:43:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72157 ']' 
00:15:48.541 09:43:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:48.541 09:43:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:48.541 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:48.541 09:43:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:48.541 09:43:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:48.541 09:43:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:48.541 [2024-11-19 09:43:36.087350] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:15:48.541 [2024-11-19 09:43:36.087439] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72157 ] 00:15:48.799 [2024-11-19 09:43:36.238357] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:48.799 [2024-11-19 09:43:36.305780] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:48.799 [2024-11-19 09:43:36.363804] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:49.735 09:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:49.735 09:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:49.735 09:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.P63RF6VLQv 00:15:49.993 09:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:15:50.255 [2024-11-19 09:43:37.672994] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:50.255 TLSTESTn1 00:15:50.255 09:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:15:50.515 09:43:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:15:50.515 "subsystems": [ 00:15:50.515 { 00:15:50.515 "subsystem": "keyring", 00:15:50.515 "config": [ 00:15:50.515 { 00:15:50.515 "method": "keyring_file_add_key", 00:15:50.515 "params": { 00:15:50.515 "name": "key0", 00:15:50.515 "path": "/tmp/tmp.P63RF6VLQv" 00:15:50.515 } 00:15:50.515 } 00:15:50.515 ] 00:15:50.515 }, 00:15:50.515 { 00:15:50.515 "subsystem": "iobuf", 00:15:50.515 "config": [ 00:15:50.515 { 00:15:50.515 "method": "iobuf_set_options", 00:15:50.515 "params": { 00:15:50.515 "small_pool_count": 8192, 00:15:50.515 "large_pool_count": 1024, 00:15:50.515 "small_bufsize": 8192, 00:15:50.515 "large_bufsize": 135168, 00:15:50.515 "enable_numa": false 00:15:50.515 } 00:15:50.515 } 00:15:50.515 ] 00:15:50.515 }, 00:15:50.515 { 00:15:50.515 "subsystem": "sock", 00:15:50.515 "config": [ 00:15:50.515 { 00:15:50.515 "method": "sock_set_default_impl", 00:15:50.515 "params": { 
00:15:50.515 "impl_name": "uring" 00:15:50.515 } 00:15:50.515 }, 00:15:50.515 { 00:15:50.515 "method": "sock_impl_set_options", 00:15:50.515 "params": { 00:15:50.515 "impl_name": "ssl", 00:15:50.515 "recv_buf_size": 4096, 00:15:50.515 "send_buf_size": 4096, 00:15:50.515 "enable_recv_pipe": true, 00:15:50.515 "enable_quickack": false, 00:15:50.515 "enable_placement_id": 0, 00:15:50.515 "enable_zerocopy_send_server": true, 00:15:50.515 "enable_zerocopy_send_client": false, 00:15:50.515 "zerocopy_threshold": 0, 00:15:50.515 "tls_version": 0, 00:15:50.515 "enable_ktls": false 00:15:50.515 } 00:15:50.515 }, 00:15:50.515 { 00:15:50.515 "method": "sock_impl_set_options", 00:15:50.515 "params": { 00:15:50.515 "impl_name": "posix", 00:15:50.515 "recv_buf_size": 2097152, 00:15:50.515 "send_buf_size": 2097152, 00:15:50.515 "enable_recv_pipe": true, 00:15:50.515 "enable_quickack": false, 00:15:50.515 "enable_placement_id": 0, 00:15:50.515 "enable_zerocopy_send_server": true, 00:15:50.515 "enable_zerocopy_send_client": false, 00:15:50.515 "zerocopy_threshold": 0, 00:15:50.515 "tls_version": 0, 00:15:50.515 "enable_ktls": false 00:15:50.515 } 00:15:50.515 }, 00:15:50.515 { 00:15:50.515 "method": "sock_impl_set_options", 00:15:50.515 "params": { 00:15:50.515 "impl_name": "uring", 00:15:50.515 "recv_buf_size": 2097152, 00:15:50.515 "send_buf_size": 2097152, 00:15:50.515 "enable_recv_pipe": true, 00:15:50.515 "enable_quickack": false, 00:15:50.515 "enable_placement_id": 0, 00:15:50.515 "enable_zerocopy_send_server": false, 00:15:50.515 "enable_zerocopy_send_client": false, 00:15:50.515 "zerocopy_threshold": 0, 00:15:50.515 "tls_version": 0, 00:15:50.515 "enable_ktls": false 00:15:50.515 } 00:15:50.515 } 00:15:50.515 ] 00:15:50.515 }, 00:15:50.515 { 00:15:50.515 "subsystem": "vmd", 00:15:50.515 "config": [] 00:15:50.515 }, 00:15:50.515 { 00:15:50.515 "subsystem": "accel", 00:15:50.515 "config": [ 00:15:50.515 { 00:15:50.515 "method": "accel_set_options", 00:15:50.515 "params": { 00:15:50.515 "small_cache_size": 128, 00:15:50.515 "large_cache_size": 16, 00:15:50.515 "task_count": 2048, 00:15:50.515 "sequence_count": 2048, 00:15:50.515 "buf_count": 2048 00:15:50.515 } 00:15:50.515 } 00:15:50.515 ] 00:15:50.515 }, 00:15:50.515 { 00:15:50.515 "subsystem": "bdev", 00:15:50.515 "config": [ 00:15:50.515 { 00:15:50.515 "method": "bdev_set_options", 00:15:50.515 "params": { 00:15:50.515 "bdev_io_pool_size": 65535, 00:15:50.515 "bdev_io_cache_size": 256, 00:15:50.515 "bdev_auto_examine": true, 00:15:50.515 "iobuf_small_cache_size": 128, 00:15:50.515 "iobuf_large_cache_size": 16 00:15:50.515 } 00:15:50.515 }, 00:15:50.515 { 00:15:50.515 "method": "bdev_raid_set_options", 00:15:50.515 "params": { 00:15:50.515 "process_window_size_kb": 1024, 00:15:50.515 "process_max_bandwidth_mb_sec": 0 00:15:50.515 } 00:15:50.515 }, 00:15:50.515 { 00:15:50.515 "method": "bdev_iscsi_set_options", 00:15:50.515 "params": { 00:15:50.515 "timeout_sec": 30 00:15:50.515 } 00:15:50.515 }, 00:15:50.515 { 00:15:50.515 "method": "bdev_nvme_set_options", 00:15:50.516 "params": { 00:15:50.516 "action_on_timeout": "none", 00:15:50.516 "timeout_us": 0, 00:15:50.516 "timeout_admin_us": 0, 00:15:50.516 "keep_alive_timeout_ms": 10000, 00:15:50.516 "arbitration_burst": 0, 00:15:50.516 "low_priority_weight": 0, 00:15:50.516 "medium_priority_weight": 0, 00:15:50.516 "high_priority_weight": 0, 00:15:50.516 "nvme_adminq_poll_period_us": 10000, 00:15:50.516 "nvme_ioq_poll_period_us": 0, 00:15:50.516 "io_queue_requests": 0, 00:15:50.516 "delay_cmd_submit": 
true, 00:15:50.516 "transport_retry_count": 4, 00:15:50.516 "bdev_retry_count": 3, 00:15:50.516 "transport_ack_timeout": 0, 00:15:50.516 "ctrlr_loss_timeout_sec": 0, 00:15:50.516 "reconnect_delay_sec": 0, 00:15:50.516 "fast_io_fail_timeout_sec": 0, 00:15:50.516 "disable_auto_failback": false, 00:15:50.516 "generate_uuids": false, 00:15:50.516 "transport_tos": 0, 00:15:50.516 "nvme_error_stat": false, 00:15:50.516 "rdma_srq_size": 0, 00:15:50.516 "io_path_stat": false, 00:15:50.516 "allow_accel_sequence": false, 00:15:50.516 "rdma_max_cq_size": 0, 00:15:50.516 "rdma_cm_event_timeout_ms": 0, 00:15:50.516 "dhchap_digests": [ 00:15:50.516 "sha256", 00:15:50.516 "sha384", 00:15:50.516 "sha512" 00:15:50.516 ], 00:15:50.516 "dhchap_dhgroups": [ 00:15:50.516 "null", 00:15:50.516 "ffdhe2048", 00:15:50.516 "ffdhe3072", 00:15:50.516 "ffdhe4096", 00:15:50.516 "ffdhe6144", 00:15:50.516 "ffdhe8192" 00:15:50.516 ] 00:15:50.516 } 00:15:50.516 }, 00:15:50.516 { 00:15:50.516 "method": "bdev_nvme_set_hotplug", 00:15:50.516 "params": { 00:15:50.516 "period_us": 100000, 00:15:50.516 "enable": false 00:15:50.516 } 00:15:50.516 }, 00:15:50.516 { 00:15:50.516 "method": "bdev_malloc_create", 00:15:50.516 "params": { 00:15:50.516 "name": "malloc0", 00:15:50.516 "num_blocks": 8192, 00:15:50.516 "block_size": 4096, 00:15:50.516 "physical_block_size": 4096, 00:15:50.516 "uuid": "e671123d-60e8-4fe0-87e0-084a6300bcb5", 00:15:50.516 "optimal_io_boundary": 0, 00:15:50.516 "md_size": 0, 00:15:50.516 "dif_type": 0, 00:15:50.516 "dif_is_head_of_md": false, 00:15:50.516 "dif_pi_format": 0 00:15:50.516 } 00:15:50.516 }, 00:15:50.516 { 00:15:50.516 "method": "bdev_wait_for_examine" 00:15:50.516 } 00:15:50.516 ] 00:15:50.516 }, 00:15:50.516 { 00:15:50.516 "subsystem": "nbd", 00:15:50.516 "config": [] 00:15:50.516 }, 00:15:50.516 { 00:15:50.516 "subsystem": "scheduler", 00:15:50.516 "config": [ 00:15:50.516 { 00:15:50.516 "method": "framework_set_scheduler", 00:15:50.516 "params": { 00:15:50.516 "name": "static" 00:15:50.516 } 00:15:50.516 } 00:15:50.516 ] 00:15:50.516 }, 00:15:50.516 { 00:15:50.516 "subsystem": "nvmf", 00:15:50.516 "config": [ 00:15:50.516 { 00:15:50.516 "method": "nvmf_set_config", 00:15:50.516 "params": { 00:15:50.516 "discovery_filter": "match_any", 00:15:50.516 "admin_cmd_passthru": { 00:15:50.516 "identify_ctrlr": false 00:15:50.516 }, 00:15:50.516 "dhchap_digests": [ 00:15:50.516 "sha256", 00:15:50.516 "sha384", 00:15:50.516 "sha512" 00:15:50.516 ], 00:15:50.516 "dhchap_dhgroups": [ 00:15:50.516 "null", 00:15:50.516 "ffdhe2048", 00:15:50.516 "ffdhe3072", 00:15:50.516 "ffdhe4096", 00:15:50.516 "ffdhe6144", 00:15:50.516 "ffdhe8192" 00:15:50.516 ] 00:15:50.516 } 00:15:50.516 }, 00:15:50.516 { 00:15:50.516 "method": "nvmf_set_max_subsystems", 00:15:50.516 "params": { 00:15:50.516 "max_subsystems": 1024 00:15:50.516 } 00:15:50.516 }, 00:15:50.516 { 00:15:50.516 "method": "nvmf_set_crdt", 00:15:50.516 "params": { 00:15:50.516 "crdt1": 0, 00:15:50.516 "crdt2": 0, 00:15:50.516 "crdt3": 0 00:15:50.516 } 00:15:50.516 }, 00:15:50.516 { 00:15:50.516 "method": "nvmf_create_transport", 00:15:50.516 "params": { 00:15:50.516 "trtype": "TCP", 00:15:50.516 "max_queue_depth": 128, 00:15:50.516 "max_io_qpairs_per_ctrlr": 127, 00:15:50.516 "in_capsule_data_size": 4096, 00:15:50.516 "max_io_size": 131072, 00:15:50.516 "io_unit_size": 131072, 00:15:50.516 "max_aq_depth": 128, 00:15:50.516 "num_shared_buffers": 511, 00:15:50.516 "buf_cache_size": 4294967295, 00:15:50.516 "dif_insert_or_strip": false, 00:15:50.516 "zcopy": false, 
00:15:50.516 "c2h_success": false, 00:15:50.516 "sock_priority": 0, 00:15:50.516 "abort_timeout_sec": 1, 00:15:50.516 "ack_timeout": 0, 00:15:50.516 "data_wr_pool_size": 0 00:15:50.516 } 00:15:50.516 }, 00:15:50.516 { 00:15:50.516 "method": "nvmf_create_subsystem", 00:15:50.516 "params": { 00:15:50.516 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:50.516 "allow_any_host": false, 00:15:50.516 "serial_number": "SPDK00000000000001", 00:15:50.516 "model_number": "SPDK bdev Controller", 00:15:50.516 "max_namespaces": 10, 00:15:50.516 "min_cntlid": 1, 00:15:50.516 "max_cntlid": 65519, 00:15:50.516 "ana_reporting": false 00:15:50.516 } 00:15:50.516 }, 00:15:50.516 { 00:15:50.516 "method": "nvmf_subsystem_add_host", 00:15:50.516 "params": { 00:15:50.516 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:50.516 "host": "nqn.2016-06.io.spdk:host1", 00:15:50.516 "psk": "key0" 00:15:50.516 } 00:15:50.516 }, 00:15:50.516 { 00:15:50.516 "method": "nvmf_subsystem_add_ns", 00:15:50.516 "params": { 00:15:50.516 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:50.516 "namespace": { 00:15:50.516 "nsid": 1, 00:15:50.516 "bdev_name": "malloc0", 00:15:50.516 "nguid": "E671123D60E84FE087E0084A6300BCB5", 00:15:50.516 "uuid": "e671123d-60e8-4fe0-87e0-084a6300bcb5", 00:15:50.516 "no_auto_visible": false 00:15:50.516 } 00:15:50.516 } 00:15:50.516 }, 00:15:50.516 { 00:15:50.516 "method": "nvmf_subsystem_add_listener", 00:15:50.516 "params": { 00:15:50.516 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:50.516 "listen_address": { 00:15:50.516 "trtype": "TCP", 00:15:50.516 "adrfam": "IPv4", 00:15:50.516 "traddr": "10.0.0.3", 00:15:50.516 "trsvcid": "4420" 00:15:50.516 }, 00:15:50.516 "secure_channel": true 00:15:50.516 } 00:15:50.516 } 00:15:50.516 ] 00:15:50.516 } 00:15:50.516 ] 00:15:50.516 }' 00:15:50.516 09:43:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:15:51.083 09:43:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:15:51.083 "subsystems": [ 00:15:51.083 { 00:15:51.083 "subsystem": "keyring", 00:15:51.083 "config": [ 00:15:51.083 { 00:15:51.083 "method": "keyring_file_add_key", 00:15:51.083 "params": { 00:15:51.083 "name": "key0", 00:15:51.083 "path": "/tmp/tmp.P63RF6VLQv" 00:15:51.083 } 00:15:51.083 } 00:15:51.083 ] 00:15:51.083 }, 00:15:51.083 { 00:15:51.083 "subsystem": "iobuf", 00:15:51.083 "config": [ 00:15:51.083 { 00:15:51.083 "method": "iobuf_set_options", 00:15:51.083 "params": { 00:15:51.083 "small_pool_count": 8192, 00:15:51.083 "large_pool_count": 1024, 00:15:51.083 "small_bufsize": 8192, 00:15:51.083 "large_bufsize": 135168, 00:15:51.083 "enable_numa": false 00:15:51.083 } 00:15:51.083 } 00:15:51.083 ] 00:15:51.083 }, 00:15:51.083 { 00:15:51.083 "subsystem": "sock", 00:15:51.083 "config": [ 00:15:51.083 { 00:15:51.083 "method": "sock_set_default_impl", 00:15:51.083 "params": { 00:15:51.083 "impl_name": "uring" 00:15:51.083 } 00:15:51.083 }, 00:15:51.083 { 00:15:51.083 "method": "sock_impl_set_options", 00:15:51.083 "params": { 00:15:51.083 "impl_name": "ssl", 00:15:51.083 "recv_buf_size": 4096, 00:15:51.083 "send_buf_size": 4096, 00:15:51.083 "enable_recv_pipe": true, 00:15:51.083 "enable_quickack": false, 00:15:51.083 "enable_placement_id": 0, 00:15:51.083 "enable_zerocopy_send_server": true, 00:15:51.083 "enable_zerocopy_send_client": false, 00:15:51.083 "zerocopy_threshold": 0, 00:15:51.083 "tls_version": 0, 00:15:51.083 "enable_ktls": false 00:15:51.083 } 00:15:51.083 }, 
00:15:51.083 { 00:15:51.083 "method": "sock_impl_set_options", 00:15:51.083 "params": { 00:15:51.083 "impl_name": "posix", 00:15:51.083 "recv_buf_size": 2097152, 00:15:51.083 "send_buf_size": 2097152, 00:15:51.083 "enable_recv_pipe": true, 00:15:51.083 "enable_quickack": false, 00:15:51.083 "enable_placement_id": 0, 00:15:51.083 "enable_zerocopy_send_server": true, 00:15:51.083 "enable_zerocopy_send_client": false, 00:15:51.083 "zerocopy_threshold": 0, 00:15:51.083 "tls_version": 0, 00:15:51.083 "enable_ktls": false 00:15:51.083 } 00:15:51.083 }, 00:15:51.083 { 00:15:51.083 "method": "sock_impl_set_options", 00:15:51.083 "params": { 00:15:51.083 "impl_name": "uring", 00:15:51.083 "recv_buf_size": 2097152, 00:15:51.083 "send_buf_size": 2097152, 00:15:51.083 "enable_recv_pipe": true, 00:15:51.083 "enable_quickack": false, 00:15:51.083 "enable_placement_id": 0, 00:15:51.083 "enable_zerocopy_send_server": false, 00:15:51.083 "enable_zerocopy_send_client": false, 00:15:51.083 "zerocopy_threshold": 0, 00:15:51.083 "tls_version": 0, 00:15:51.083 "enable_ktls": false 00:15:51.083 } 00:15:51.083 } 00:15:51.083 ] 00:15:51.083 }, 00:15:51.083 { 00:15:51.083 "subsystem": "vmd", 00:15:51.083 "config": [] 00:15:51.083 }, 00:15:51.083 { 00:15:51.083 "subsystem": "accel", 00:15:51.083 "config": [ 00:15:51.083 { 00:15:51.083 "method": "accel_set_options", 00:15:51.083 "params": { 00:15:51.083 "small_cache_size": 128, 00:15:51.083 "large_cache_size": 16, 00:15:51.083 "task_count": 2048, 00:15:51.083 "sequence_count": 2048, 00:15:51.083 "buf_count": 2048 00:15:51.084 } 00:15:51.084 } 00:15:51.084 ] 00:15:51.084 }, 00:15:51.084 { 00:15:51.084 "subsystem": "bdev", 00:15:51.084 "config": [ 00:15:51.084 { 00:15:51.084 "method": "bdev_set_options", 00:15:51.084 "params": { 00:15:51.084 "bdev_io_pool_size": 65535, 00:15:51.084 "bdev_io_cache_size": 256, 00:15:51.084 "bdev_auto_examine": true, 00:15:51.084 "iobuf_small_cache_size": 128, 00:15:51.084 "iobuf_large_cache_size": 16 00:15:51.084 } 00:15:51.084 }, 00:15:51.084 { 00:15:51.084 "method": "bdev_raid_set_options", 00:15:51.084 "params": { 00:15:51.084 "process_window_size_kb": 1024, 00:15:51.084 "process_max_bandwidth_mb_sec": 0 00:15:51.084 } 00:15:51.084 }, 00:15:51.084 { 00:15:51.084 "method": "bdev_iscsi_set_options", 00:15:51.084 "params": { 00:15:51.084 "timeout_sec": 30 00:15:51.084 } 00:15:51.084 }, 00:15:51.084 { 00:15:51.084 "method": "bdev_nvme_set_options", 00:15:51.084 "params": { 00:15:51.084 "action_on_timeout": "none", 00:15:51.084 "timeout_us": 0, 00:15:51.084 "timeout_admin_us": 0, 00:15:51.084 "keep_alive_timeout_ms": 10000, 00:15:51.084 "arbitration_burst": 0, 00:15:51.084 "low_priority_weight": 0, 00:15:51.084 "medium_priority_weight": 0, 00:15:51.084 "high_priority_weight": 0, 00:15:51.084 "nvme_adminq_poll_period_us": 10000, 00:15:51.084 "nvme_ioq_poll_period_us": 0, 00:15:51.084 "io_queue_requests": 512, 00:15:51.084 "delay_cmd_submit": true, 00:15:51.084 "transport_retry_count": 4, 00:15:51.084 "bdev_retry_count": 3, 00:15:51.084 "transport_ack_timeout": 0, 00:15:51.084 "ctrlr_loss_timeout_sec": 0, 00:15:51.084 "reconnect_delay_sec": 0, 00:15:51.084 "fast_io_fail_timeout_sec": 0, 00:15:51.084 "disable_auto_failback": false, 00:15:51.084 "generate_uuids": false, 00:15:51.084 "transport_tos": 0, 00:15:51.084 "nvme_error_stat": false, 00:15:51.084 "rdma_srq_size": 0, 00:15:51.084 "io_path_stat": false, 00:15:51.084 "allow_accel_sequence": false, 00:15:51.084 "rdma_max_cq_size": 0, 00:15:51.084 "rdma_cm_event_timeout_ms": 0, 00:15:51.084 
"dhchap_digests": [ 00:15:51.084 "sha256", 00:15:51.084 "sha384", 00:15:51.084 "sha512" 00:15:51.084 ], 00:15:51.084 "dhchap_dhgroups": [ 00:15:51.084 "null", 00:15:51.084 "ffdhe2048", 00:15:51.084 "ffdhe3072", 00:15:51.084 "ffdhe4096", 00:15:51.084 "ffdhe6144", 00:15:51.084 "ffdhe8192" 00:15:51.084 ] 00:15:51.084 } 00:15:51.084 }, 00:15:51.084 { 00:15:51.084 "method": "bdev_nvme_attach_controller", 00:15:51.084 "params": { 00:15:51.084 "name": "TLSTEST", 00:15:51.084 "trtype": "TCP", 00:15:51.084 "adrfam": "IPv4", 00:15:51.084 "traddr": "10.0.0.3", 00:15:51.084 "trsvcid": "4420", 00:15:51.084 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:51.084 "prchk_reftag": false, 00:15:51.084 "prchk_guard": false, 00:15:51.084 "ctrlr_loss_timeout_sec": 0, 00:15:51.084 "reconnect_delay_sec": 0, 00:15:51.084 "fast_io_fail_timeout_sec": 0, 00:15:51.084 "psk": "key0", 00:15:51.084 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:51.084 "hdgst": false, 00:15:51.084 "ddgst": false, 00:15:51.084 "multipath": "multipath" 00:15:51.084 } 00:15:51.084 }, 00:15:51.084 { 00:15:51.084 "method": "bdev_nvme_set_hotplug", 00:15:51.084 "params": { 00:15:51.084 "period_us": 100000, 00:15:51.084 "enable": false 00:15:51.084 } 00:15:51.084 }, 00:15:51.084 { 00:15:51.084 "method": "bdev_wait_for_examine" 00:15:51.084 } 00:15:51.084 ] 00:15:51.084 }, 00:15:51.084 { 00:15:51.084 "subsystem": "nbd", 00:15:51.084 "config": [] 00:15:51.084 } 00:15:51.084 ] 00:15:51.084 }' 00:15:51.084 09:43:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 72157 00:15:51.084 09:43:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72157 ']' 00:15:51.084 09:43:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72157 00:15:51.084 09:43:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:51.084 09:43:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:51.084 09:43:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72157 00:15:51.084 09:43:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:15:51.084 09:43:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:15:51.084 killing process with pid 72157 00:15:51.084 09:43:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72157' 00:15:51.084 Received shutdown signal, test time was about 10.000000 seconds 00:15:51.084 00:15:51.084 Latency(us) 00:15:51.084 [2024-11-19T09:43:38.707Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:51.084 [2024-11-19T09:43:38.707Z] =================================================================================================================== 00:15:51.084 [2024-11-19T09:43:38.707Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:51.084 09:43:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72157 00:15:51.084 09:43:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72157 00:15:51.343 09:43:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 72108 00:15:51.343 09:43:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72108 ']' 00:15:51.343 09:43:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # 
kill -0 72108 00:15:51.343 09:43:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:51.343 09:43:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:51.343 09:43:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72108 00:15:51.343 09:43:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:51.343 09:43:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:51.343 killing process with pid 72108 00:15:51.343 09:43:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72108' 00:15:51.343 09:43:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72108 00:15:51.343 09:43:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72108 00:15:51.343 09:43:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:15:51.343 09:43:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:51.343 09:43:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:15:51.343 "subsystems": [ 00:15:51.343 { 00:15:51.343 "subsystem": "keyring", 00:15:51.343 "config": [ 00:15:51.343 { 00:15:51.343 "method": "keyring_file_add_key", 00:15:51.343 "params": { 00:15:51.343 "name": "key0", 00:15:51.343 "path": "/tmp/tmp.P63RF6VLQv" 00:15:51.343 } 00:15:51.343 } 00:15:51.343 ] 00:15:51.343 }, 00:15:51.343 { 00:15:51.343 "subsystem": "iobuf", 00:15:51.343 "config": [ 00:15:51.343 { 00:15:51.343 "method": "iobuf_set_options", 00:15:51.343 "params": { 00:15:51.343 "small_pool_count": 8192, 00:15:51.343 "large_pool_count": 1024, 00:15:51.343 "small_bufsize": 8192, 00:15:51.343 "large_bufsize": 135168, 00:15:51.343 "enable_numa": false 00:15:51.343 } 00:15:51.343 } 00:15:51.343 ] 00:15:51.343 }, 00:15:51.343 { 00:15:51.343 "subsystem": "sock", 00:15:51.343 "config": [ 00:15:51.343 { 00:15:51.343 "method": "sock_set_default_impl", 00:15:51.343 "params": { 00:15:51.343 "impl_name": "uring" 00:15:51.343 } 00:15:51.343 }, 00:15:51.343 { 00:15:51.343 "method": "sock_impl_set_options", 00:15:51.343 "params": { 00:15:51.343 "impl_name": "ssl", 00:15:51.343 "recv_buf_size": 4096, 00:15:51.343 "send_buf_size": 4096, 00:15:51.343 "enable_recv_pipe": true, 00:15:51.343 "enable_quickack": false, 00:15:51.343 "enable_placement_id": 0, 00:15:51.343 "enable_zerocopy_send_server": true, 00:15:51.343 "enable_zerocopy_send_client": false, 00:15:51.343 "zerocopy_threshold": 0, 00:15:51.343 "tls_version": 0, 00:15:51.343 "enable_ktls": false 00:15:51.343 } 00:15:51.343 }, 00:15:51.343 { 00:15:51.343 "method": "sock_impl_set_options", 00:15:51.343 "params": { 00:15:51.343 "impl_name": "posix", 00:15:51.343 "recv_buf_size": 2097152, 00:15:51.343 "send_buf_size": 2097152, 00:15:51.343 "enable_recv_pipe": true, 00:15:51.343 "enable_quickack": false, 00:15:51.343 "enable_placement_id": 0, 00:15:51.343 "enable_zerocopy_send_server": true, 00:15:51.343 "enable_zerocopy_send_client": false, 00:15:51.343 "zerocopy_threshold": 0, 00:15:51.343 "tls_version": 0, 00:15:51.343 "enable_ktls": false 00:15:51.343 } 00:15:51.343 }, 00:15:51.343 { 00:15:51.343 "method": "sock_impl_set_options", 00:15:51.343 "params": { 00:15:51.343 "impl_name": "uring", 00:15:51.343 "recv_buf_size": 2097152, 00:15:51.343 
"send_buf_size": 2097152, 00:15:51.343 "enable_recv_pipe": true, 00:15:51.343 "enable_quickack": false, 00:15:51.343 "enable_placement_id": 0, 00:15:51.343 "enable_zerocopy_send_server": false, 00:15:51.343 "enable_zerocopy_send_client": false, 00:15:51.343 "zerocopy_threshold": 0, 00:15:51.343 "tls_version": 0, 00:15:51.343 "enable_ktls": false 00:15:51.343 } 00:15:51.343 } 00:15:51.343 ] 00:15:51.343 }, 00:15:51.343 { 00:15:51.343 "subsystem": "vmd", 00:15:51.343 "config": [] 00:15:51.343 }, 00:15:51.343 { 00:15:51.343 "subsystem": "accel", 00:15:51.343 "config": [ 00:15:51.343 { 00:15:51.343 "method": "accel_set_options", 00:15:51.343 "params": { 00:15:51.343 "small_cache_size": 128, 00:15:51.343 "large_cache_size": 16, 00:15:51.343 "task_count": 2048, 00:15:51.343 "sequence_count": 2048, 00:15:51.343 "buf_count": 2048 00:15:51.343 } 00:15:51.343 } 00:15:51.343 ] 00:15:51.343 }, 00:15:51.343 { 00:15:51.343 "subsystem": "bdev", 00:15:51.343 "config": [ 00:15:51.343 { 00:15:51.343 "method": "bdev_set_options", 00:15:51.343 "params": { 00:15:51.343 "bdev_io_pool_size": 65535, 00:15:51.343 "bdev_io_cache_size": 256, 00:15:51.343 "bdev_auto_examine": true, 00:15:51.343 "iobuf_small_cache_size": 128, 00:15:51.343 "iobuf_large_cache_size": 16 00:15:51.343 } 00:15:51.343 }, 00:15:51.343 { 00:15:51.343 "method": "bdev_raid_set_options", 00:15:51.343 "params": { 00:15:51.343 "process_window_size_kb": 1024, 00:15:51.343 "process_max_bandwidth_mb_sec": 0 00:15:51.343 } 00:15:51.343 }, 00:15:51.343 { 00:15:51.343 "method": "bdev_iscsi_set_options", 00:15:51.343 "params": { 00:15:51.343 "timeout_sec": 30 00:15:51.343 } 00:15:51.343 }, 00:15:51.343 { 00:15:51.343 "method": "bdev_nvme_set_options", 00:15:51.343 "params": { 00:15:51.344 "action_on_timeout": "none", 00:15:51.344 "timeout_us": 0, 00:15:51.344 "timeout_admin_us": 0, 00:15:51.344 "keep_alive_timeout_ms": 10000, 00:15:51.344 "arbitration_burst": 0, 00:15:51.344 "low_priority_weight": 0, 00:15:51.344 "medium_priority_weight": 0, 00:15:51.344 "high_priority_weight": 0, 00:15:51.344 "nvme_adminq_poll_period_us": 10000, 00:15:51.344 "nvme_ioq_poll_period_us": 0, 00:15:51.344 "io_queue_requests": 0, 00:15:51.344 "delay_cmd_submit": true, 00:15:51.344 "transport_retry_count": 4, 00:15:51.344 "bdev_retry_count": 3, 00:15:51.344 "transport_ack_timeout": 0, 00:15:51.344 "ctrlr_loss_timeout_sec": 0, 00:15:51.344 "reconnect_delay_sec": 0, 00:15:51.344 "fast_io_fail_timeout_sec": 0, 00:15:51.344 "disable_auto_failback": false, 00:15:51.344 "generate_uuids": false, 00:15:51.344 "transport_tos": 0, 00:15:51.344 "nvme_error_stat": false, 00:15:51.344 "rdma_srq_size": 0, 00:15:51.344 "io_path_stat": false, 00:15:51.344 "allow_accel_sequence": false, 00:15:51.344 "rdma_max_cq_size": 0, 00:15:51.344 "rdma_cm_event_timeout_ms": 0, 00:15:51.344 "dhchap_digests": [ 00:15:51.344 "sha256", 00:15:51.344 "sha384", 00:15:51.344 "sha512" 00:15:51.344 ], 00:15:51.344 "dhchap_dhgroups": [ 00:15:51.344 "null", 00:15:51.344 "ffdhe2048", 00:15:51.344 "ffdhe3072", 00:15:51.344 "ffdhe4096", 00:15:51.344 "ffdhe6144", 00:15:51.344 "ffdhe8192" 00:15:51.344 ] 00:15:51.344 } 00:15:51.344 }, 00:15:51.344 { 00:15:51.344 "method": "bdev_nvme_set_hotplug", 00:15:51.344 "params": { 00:15:51.344 "period_us": 100000, 00:15:51.344 "enable": false 00:15:51.344 } 00:15:51.344 }, 00:15:51.344 { 00:15:51.344 "method": "bdev_malloc_create", 00:15:51.344 "params": { 00:15:51.344 "name": "malloc0", 00:15:51.344 "num_blocks": 8192, 00:15:51.344 "block_size": 4096, 00:15:51.344 
"physical_block_size": 4096, 00:15:51.344 "uuid": "e671123d-60e8-4fe0-87e0-084a6300bcb5", 00:15:51.344 "optimal_io_boundary": 0, 00:15:51.344 "md_size": 0, 00:15:51.344 "dif_type": 0, 00:15:51.344 "dif_is_head_of_md": false, 00:15:51.344 "dif_pi_format": 0 00:15:51.344 } 00:15:51.344 }, 00:15:51.344 { 00:15:51.344 "method": "bdev_wait_for_examine" 00:15:51.344 } 00:15:51.344 ] 00:15:51.344 }, 00:15:51.344 { 00:15:51.344 "subsystem": "nbd", 00:15:51.344 "config": [] 00:15:51.344 }, 00:15:51.344 { 00:15:51.344 "subsystem": "scheduler", 00:15:51.344 "config": [ 00:15:51.344 { 00:15:51.344 "method": "framework_set_scheduler", 00:15:51.344 "params": { 00:15:51.344 "name": "static" 00:15:51.344 } 00:15:51.344 } 00:15:51.344 ] 00:15:51.344 }, 00:15:51.344 { 00:15:51.344 "subsystem": "nvmf", 00:15:51.344 "config": [ 00:15:51.344 { 00:15:51.344 "method": "nvmf_set_config", 00:15:51.344 "params": { 00:15:51.344 "discovery_filter": "match_any", 00:15:51.344 "admin_cmd_passthru": { 00:15:51.344 "identify_ctrlr": false 00:15:51.344 }, 00:15:51.344 "dhchap_digests": [ 00:15:51.344 "sha256", 00:15:51.344 "sha384", 00:15:51.344 "sha512" 00:15:51.344 ], 00:15:51.344 "dhchap_dhgroups": [ 00:15:51.344 "null", 00:15:51.344 "ffdhe2048", 00:15:51.344 "ffdhe3072", 00:15:51.344 "ffdhe4096", 00:15:51.344 "ffdhe6144", 00:15:51.344 "ffdhe8192" 00:15:51.344 ] 00:15:51.344 } 00:15:51.344 }, 00:15:51.344 { 00:15:51.344 "method": "nvmf_set_max_subsystems", 00:15:51.344 "params": { 00:15:51.344 "max_subsystems": 1024 00:15:51.344 } 00:15:51.344 }, 00:15:51.344 { 00:15:51.344 "method": "nvmf_set_crdt", 00:15:51.344 "params": { 00:15:51.344 "crdt1": 0, 00:15:51.344 "crdt2": 0, 00:15:51.344 "crdt3": 0 00:15:51.344 } 00:15:51.344 }, 00:15:51.344 { 00:15:51.344 "method": "nvmf_create_transport", 00:15:51.344 "params": { 00:15:51.344 "trtype": "TCP", 00:15:51.344 "max_queue_depth": 128, 00:15:51.344 "max_io_qpairs_per_ctrlr": 127, 00:15:51.344 "in_capsule_data_size": 4096, 00:15:51.344 "max_io_size": 131072, 00:15:51.344 "io_unit_size": 131072, 00:15:51.344 "max_aq_depth": 128, 00:15:51.344 "num_shared_buffers": 511, 00:15:51.344 "buf_cache_size": 4294967295, 00:15:51.344 "dif_insert_or_strip": false, 00:15:51.344 "zcopy": false, 00:15:51.344 "c2h_success": false, 00:15:51.344 "sock_priority": 0, 00:15:51.344 "abort_timeout_sec": 1, 00:15:51.344 "ack_timeout": 0, 00:15:51.344 "data_wr_pool_size": 0 00:15:51.344 } 00:15:51.344 }, 00:15:51.344 { 00:15:51.344 "method": "nvmf_create_subsystem", 00:15:51.344 "params": { 00:15:51.344 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:51.344 "allow_any_host": false, 00:15:51.344 "serial_number": "SPDK00000000000001", 00:15:51.344 "model_number": "SPDK bdev Controller", 00:15:51.344 "max_namespaces": 10, 00:15:51.344 "min_cntlid": 1, 00:15:51.344 "max_cntlid": 65519, 00:15:51.344 "ana_reporting": false 00:15:51.344 } 00:15:51.344 }, 00:15:51.344 { 00:15:51.344 "method": "nvmf_subsystem_add_host", 00:15:51.344 "params": { 00:15:51.344 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:51.344 "host": "nqn.2016-06.io.spdk:host1", 00:15:51.344 "psk": "key0" 00:15:51.344 } 00:15:51.344 }, 00:15:51.344 { 00:15:51.344 "method": "nvmf_subsystem_add_ns", 00:15:51.344 "params": { 00:15:51.344 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:51.344 "namespace": { 00:15:51.344 "nsid": 1, 00:15:51.344 "bdev_name": "malloc0", 00:15:51.344 "nguid": "E671123D60E84FE087E0084A6300BCB5", 00:15:51.344 "uuid": "e671123d-60e8-4fe0-87e0-084a6300bcb5", 00:15:51.344 "no_auto_visible": false 00:15:51.344 } 00:15:51.344 } 
00:15:51.344 }, 00:15:51.344 { 00:15:51.344 "method": "nvmf_subsystem_add_listener", 00:15:51.344 "params": { 00:15:51.344 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:51.344 "listen_address": { 00:15:51.344 "trtype": "TCP", 00:15:51.344 "adrfam": "IPv4", 00:15:51.344 "traddr": "10.0.0.3", 00:15:51.344 "trsvcid": "4420" 00:15:51.344 }, 00:15:51.344 "secure_channel": true 00:15:51.344 } 00:15:51.344 } 00:15:51.344 ] 00:15:51.344 } 00:15:51.344 ] 00:15:51.344 }' 00:15:51.344 09:43:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:51.344 09:43:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:51.344 09:43:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:15:51.344 09:43:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=72207 00:15:51.344 09:43:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 72207 00:15:51.344 09:43:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72207 ']' 00:15:51.344 09:43:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:51.344 09:43:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:51.344 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:51.344 09:43:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:51.344 09:43:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:51.344 09:43:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:51.603 [2024-11-19 09:43:39.006701] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:15:51.603 [2024-11-19 09:43:39.006812] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:51.603 [2024-11-19 09:43:39.153673] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:51.603 [2024-11-19 09:43:39.213371] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:51.603 [2024-11-19 09:43:39.213429] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:51.603 [2024-11-19 09:43:39.213456] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:51.603 [2024-11-19 09:43:39.213480] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:51.603 [2024-11-19 09:43:39.213487] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:51.603 [2024-11-19 09:43:39.213951] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:51.861 [2024-11-19 09:43:39.386682] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:51.861 [2024-11-19 09:43:39.470090] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:52.119 [2024-11-19 09:43:39.502015] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:52.119 [2024-11-19 09:43:39.502267] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:52.687 09:43:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:52.687 09:43:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:52.687 09:43:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:52.687 09:43:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:52.687 09:43:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:52.687 09:43:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:52.687 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:52.687 09:43:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=72239 00:15:52.687 09:43:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 72239 /var/tmp/bdevperf.sock 00:15:52.687 09:43:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72239 ']' 00:15:52.687 09:43:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:52.687 09:43:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:52.687 09:43:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
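(Editorial note: at this point the first target instance, pid 72207, is up and listening for TLS connections on 10.0.0.3:4420; its entire configuration came from the JSON blob echoed into -c /dev/fd/62 above. Stripped of the bash tracing, the TLS-relevant part of that setup corresponds to roughly the following rpc.py sequence — a sketch assembled from the setup_nvmf_tgt trace later in this log, reusing the key file, NQNs, and addresses from this run:)
# target side: TCP transport, a malloc namespace, and a PSK-restricted host on a TLS listener
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
/home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.P63RF6VLQv
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0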
00:15:52.687 09:43:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:15:52.687 09:43:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:52.687 09:43:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:52.687 09:43:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:15:52.687 "subsystems": [ 00:15:52.687 { 00:15:52.687 "subsystem": "keyring", 00:15:52.687 "config": [ 00:15:52.687 { 00:15:52.687 "method": "keyring_file_add_key", 00:15:52.687 "params": { 00:15:52.687 "name": "key0", 00:15:52.687 "path": "/tmp/tmp.P63RF6VLQv" 00:15:52.687 } 00:15:52.687 } 00:15:52.687 ] 00:15:52.687 }, 00:15:52.687 { 00:15:52.687 "subsystem": "iobuf", 00:15:52.687 "config": [ 00:15:52.687 { 00:15:52.687 "method": "iobuf_set_options", 00:15:52.687 "params": { 00:15:52.687 "small_pool_count": 8192, 00:15:52.687 "large_pool_count": 1024, 00:15:52.687 "small_bufsize": 8192, 00:15:52.687 "large_bufsize": 135168, 00:15:52.687 "enable_numa": false 00:15:52.687 } 00:15:52.687 } 00:15:52.687 ] 00:15:52.687 }, 00:15:52.687 { 00:15:52.687 "subsystem": "sock", 00:15:52.687 "config": [ 00:15:52.687 { 00:15:52.687 "method": "sock_set_default_impl", 00:15:52.687 "params": { 00:15:52.687 "impl_name": "uring" 00:15:52.687 } 00:15:52.687 }, 00:15:52.687 { 00:15:52.687 "method": "sock_impl_set_options", 00:15:52.687 "params": { 00:15:52.687 "impl_name": "ssl", 00:15:52.688 "recv_buf_size": 4096, 00:15:52.688 "send_buf_size": 4096, 00:15:52.688 "enable_recv_pipe": true, 00:15:52.688 "enable_quickack": false, 00:15:52.688 "enable_placement_id": 0, 00:15:52.688 "enable_zerocopy_send_server": true, 00:15:52.688 "enable_zerocopy_send_client": false, 00:15:52.688 "zerocopy_threshold": 0, 00:15:52.688 "tls_version": 0, 00:15:52.688 "enable_ktls": false 00:15:52.688 } 00:15:52.688 }, 00:15:52.688 { 00:15:52.688 "method": "sock_impl_set_options", 00:15:52.688 "params": { 00:15:52.688 "impl_name": "posix", 00:15:52.688 "recv_buf_size": 2097152, 00:15:52.688 "send_buf_size": 2097152, 00:15:52.688 "enable_recv_pipe": true, 00:15:52.688 "enable_quickack": false, 00:15:52.688 "enable_placement_id": 0, 00:15:52.688 "enable_zerocopy_send_server": true, 00:15:52.688 "enable_zerocopy_send_client": false, 00:15:52.688 "zerocopy_threshold": 0, 00:15:52.688 "tls_version": 0, 00:15:52.688 "enable_ktls": false 00:15:52.688 } 00:15:52.688 }, 00:15:52.688 { 00:15:52.688 "method": "sock_impl_set_options", 00:15:52.688 "params": { 00:15:52.688 "impl_name": "uring", 00:15:52.688 "recv_buf_size": 2097152, 00:15:52.688 "send_buf_size": 2097152, 00:15:52.688 "enable_recv_pipe": true, 00:15:52.688 "enable_quickack": false, 00:15:52.688 "enable_placement_id": 0, 00:15:52.688 "enable_zerocopy_send_server": false, 00:15:52.688 "enable_zerocopy_send_client": false, 00:15:52.688 "zerocopy_threshold": 0, 00:15:52.688 "tls_version": 0, 00:15:52.688 "enable_ktls": false 00:15:52.688 } 00:15:52.688 } 00:15:52.688 ] 00:15:52.688 }, 00:15:52.688 { 00:15:52.688 "subsystem": "vmd", 00:15:52.688 "config": [] 00:15:52.688 }, 00:15:52.688 { 00:15:52.688 "subsystem": "accel", 00:15:52.688 "config": [ 00:15:52.688 { 00:15:52.688 "method": "accel_set_options", 00:15:52.688 "params": { 00:15:52.688 "small_cache_size": 128, 00:15:52.688 "large_cache_size": 16, 00:15:52.688 "task_count": 2048, 00:15:52.688 "sequence_count": 
2048, 00:15:52.688 "buf_count": 2048 00:15:52.688 } 00:15:52.688 } 00:15:52.688 ] 00:15:52.688 }, 00:15:52.688 { 00:15:52.688 "subsystem": "bdev", 00:15:52.688 "config": [ 00:15:52.688 { 00:15:52.688 "method": "bdev_set_options", 00:15:52.688 "params": { 00:15:52.688 "bdev_io_pool_size": 65535, 00:15:52.688 "bdev_io_cache_size": 256, 00:15:52.688 "bdev_auto_examine": true, 00:15:52.688 "iobuf_small_cache_size": 128, 00:15:52.688 "iobuf_large_cache_size": 16 00:15:52.688 } 00:15:52.688 }, 00:15:52.688 { 00:15:52.688 "method": "bdev_raid_set_options", 00:15:52.688 "params": { 00:15:52.688 "process_window_size_kb": 1024, 00:15:52.688 "process_max_bandwidth_mb_sec": 0 00:15:52.688 } 00:15:52.688 }, 00:15:52.688 { 00:15:52.688 "method": "bdev_iscsi_set_options", 00:15:52.688 "params": { 00:15:52.688 "timeout_sec": 30 00:15:52.688 } 00:15:52.688 }, 00:15:52.688 { 00:15:52.688 "method": "bdev_nvme_set_options", 00:15:52.688 "params": { 00:15:52.688 "action_on_timeout": "none", 00:15:52.688 "timeout_us": 0, 00:15:52.688 "timeout_admin_us": 0, 00:15:52.688 "keep_alive_timeout_ms": 10000, 00:15:52.688 "arbitration_burst": 0, 00:15:52.688 "low_priority_weight": 0, 00:15:52.688 "medium_priority_weight": 0, 00:15:52.688 "high_priority_weight": 0, 00:15:52.688 "nvme_adminq_poll_period_us": 10000, 00:15:52.688 "nvme_ioq_poll_period_us": 0, 00:15:52.688 "io_queue_requests": 512, 00:15:52.688 "delay_cmd_submit": true, 00:15:52.688 "transport_retry_count": 4, 00:15:52.688 "bdev_retry_count": 3, 00:15:52.688 "transport_ack_timeout": 0, 00:15:52.688 "ctrlr_loss_timeout_sec": 0, 00:15:52.688 "reconnect_delay_sec": 0, 00:15:52.688 "fast_io_fail_timeout_sec": 0, 00:15:52.688 "disable_auto_failback": false, 00:15:52.688 "generate_uuids": false, 00:15:52.688 "transport_tos": 0, 00:15:52.688 "nvme_error_stat": false, 00:15:52.688 "rdma_srq_size": 0, 00:15:52.688 "io_path_stat": false, 00:15:52.688 "allow_accel_sequence": false, 00:15:52.688 "rdma_max_cq_size": 0, 00:15:52.688 "rdma_cm_event_timeout_ms": 0, 00:15:52.688 "dhchap_digests": [ 00:15:52.688 "sha256", 00:15:52.688 "sha384", 00:15:52.688 "sha512" 00:15:52.688 ], 00:15:52.688 "dhchap_dhgroups": [ 00:15:52.688 "null", 00:15:52.688 "ffdhe2048", 00:15:52.688 "ffdhe3072", 00:15:52.688 "ffdhe4096", 00:15:52.688 "ffdhe6144", 00:15:52.688 "ffdhe8192" 00:15:52.688 ] 00:15:52.688 } 00:15:52.688 }, 00:15:52.688 { 00:15:52.688 "method": "bdev_nvme_attach_controller", 00:15:52.688 "params": { 00:15:52.688 "name": "TLSTEST", 00:15:52.688 "trtype": "TCP", 00:15:52.688 "adrfam": "IPv4", 00:15:52.688 "traddr": "10.0.0.3", 00:15:52.688 "trsvcid": "4420", 00:15:52.688 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:52.688 "prchk_reftag": false, 00:15:52.688 "prchk_guard": false, 00:15:52.688 "ctrlr_loss_timeout_sec": 0, 00:15:52.688 "reconnect_delay_sec": 0, 00:15:52.688 "fast_io_fail_timeout_sec": 0, 00:15:52.688 "psk": "key0", 00:15:52.688 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:52.688 "hdgst": false, 00:15:52.688 "ddgst": false, 00:15:52.688 "multipath": "multipath" 00:15:52.688 } 00:15:52.688 }, 00:15:52.688 { 00:15:52.688 "method": "bdev_nvme_set_hotplug", 00:15:52.688 "params": { 00:15:52.688 "period_us": 100000, 00:15:52.688 "enable": false 00:15:52.688 } 00:15:52.688 }, 00:15:52.688 { 00:15:52.688 "method": "bdev_wait_for_examine" 00:15:52.688 } 00:15:52.688 ] 00:15:52.688 }, 00:15:52.688 { 00:15:52.688 "subsystem": "nbd", 00:15:52.688 "config": [] 00:15:52.688 } 00:15:52.688 ] 00:15:52.688 }' 00:15:52.688 [2024-11-19 09:43:40.123946] Starting SPDK v25.01-pre git 
sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:15:52.688 [2024-11-19 09:43:40.124032] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72239 ] 00:15:52.688 [2024-11-19 09:43:40.270608] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:52.947 [2024-11-19 09:43:40.335833] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:52.947 [2024-11-19 09:43:40.474003] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:52.947 [2024-11-19 09:43:40.523853] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:53.882 09:43:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:53.882 09:43:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:53.882 09:43:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:15:53.882 Running I/O for 10 seconds... 00:15:55.868 3968.00 IOPS, 15.50 MiB/s [2024-11-19T09:43:44.426Z] 4001.50 IOPS, 15.63 MiB/s [2024-11-19T09:43:45.361Z] 4033.00 IOPS, 15.75 MiB/s [2024-11-19T09:43:46.298Z] 4058.50 IOPS, 15.85 MiB/s [2024-11-19T09:43:47.673Z] 4071.60 IOPS, 15.90 MiB/s [2024-11-19T09:43:48.606Z] 4081.17 IOPS, 15.94 MiB/s [2024-11-19T09:43:49.541Z] 4082.57 IOPS, 15.95 MiB/s [2024-11-19T09:43:50.475Z] 4083.12 IOPS, 15.95 MiB/s [2024-11-19T09:43:51.411Z] 4079.78 IOPS, 15.94 MiB/s [2024-11-19T09:43:51.411Z] 4079.30 IOPS, 15.93 MiB/s 00:16:03.788 Latency(us) 00:16:03.788 [2024-11-19T09:43:51.411Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:03.788 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:16:03.788 Verification LBA range: start 0x0 length 0x2000 00:16:03.788 TLSTESTn1 : 10.02 4084.86 15.96 0.00 0.00 31276.46 6345.08 23354.65 00:16:03.788 [2024-11-19T09:43:51.411Z] =================================================================================================================== 00:16:03.788 [2024-11-19T09:43:51.411Z] Total : 4084.86 15.96 0.00 0.00 31276.46 6345.08 23354.65 00:16:03.788 { 00:16:03.788 "results": [ 00:16:03.788 { 00:16:03.788 "job": "TLSTESTn1", 00:16:03.788 "core_mask": "0x4", 00:16:03.788 "workload": "verify", 00:16:03.788 "status": "finished", 00:16:03.788 "verify_range": { 00:16:03.788 "start": 0, 00:16:03.788 "length": 8192 00:16:03.788 }, 00:16:03.788 "queue_depth": 128, 00:16:03.788 "io_size": 4096, 00:16:03.788 "runtime": 10.016508, 00:16:03.788 "iops": 4084.856718529052, 00:16:03.788 "mibps": 15.95647155675411, 00:16:03.788 "io_failed": 0, 00:16:03.788 "io_timeout": 0, 00:16:03.788 "avg_latency_us": 31276.46176165803, 00:16:03.788 "min_latency_us": 6345.076363636364, 00:16:03.788 "max_latency_us": 23354.647272727274 00:16:03.788 } 00:16:03.788 ], 00:16:03.788 "core_count": 1 00:16:03.788 } 00:16:03.788 09:43:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:03.788 09:43:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 72239 00:16:03.788 09:43:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72239 ']' 00:16:03.788 
09:43:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72239 00:16:03.788 09:43:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:03.788 09:43:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:03.788 09:43:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72239 00:16:03.788 09:43:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:16:03.788 09:43:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:16:03.788 killing process with pid 72239 00:16:03.788 09:43:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72239' 00:16:03.788 Received shutdown signal, test time was about 10.000000 seconds 00:16:03.788 00:16:03.788 Latency(us) 00:16:03.788 [2024-11-19T09:43:51.411Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:03.788 [2024-11-19T09:43:51.411Z] =================================================================================================================== 00:16:03.788 [2024-11-19T09:43:51.411Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:03.788 09:43:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72239 00:16:03.788 09:43:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72239 00:16:04.047 09:43:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 72207 00:16:04.047 09:43:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72207 ']' 00:16:04.047 09:43:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72207 00:16:04.047 09:43:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:04.047 09:43:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:04.047 09:43:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72207 00:16:04.047 09:43:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:04.047 09:43:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:04.047 killing process with pid 72207 00:16:04.047 09:43:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72207' 00:16:04.047 09:43:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72207 00:16:04.047 09:43:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72207 00:16:04.305 09:43:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:16:04.305 09:43:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:04.305 09:43:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:04.306 09:43:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:04.306 09:43:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=72376 00:16:04.306 09:43:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 
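(Editorial note: unlike pid 72207 above, which received its full configuration on -c /dev/fd/62, this target instance, pid 72376, starts empty and is built up over the RPC socket by the script's setup_nvmf_tgt helper, traced below. On the host side, once bdevperf is listening on its own RPC socket, the PSK handshake reduces to two calls — a sketch copied from the trace that follows, with the socket path, key file, and addresses from this run:)
# host side: register the PSK in the keyring, then attach the controller with --psk
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.P63RF6VLQv
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1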
00:16:04.306 09:43:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 72376 00:16:04.306 09:43:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72376 ']' 00:16:04.306 09:43:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:04.306 09:43:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:04.306 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:04.306 09:43:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:04.306 09:43:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:04.306 09:43:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:04.306 [2024-11-19 09:43:51.846635] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:16:04.306 [2024-11-19 09:43:51.846744] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:04.565 [2024-11-19 09:43:51.997842] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:04.565 [2024-11-19 09:43:52.063873] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:04.565 [2024-11-19 09:43:52.063954] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:04.565 [2024-11-19 09:43:52.063980] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:04.565 [2024-11-19 09:43:52.063992] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:04.565 [2024-11-19 09:43:52.064001] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:04.565 [2024-11-19 09:43:52.064485] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:04.565 [2024-11-19 09:43:52.123515] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:04.824 09:43:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:04.824 09:43:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:04.824 09:43:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:04.824 09:43:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:04.824 09:43:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:04.824 09:43:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:04.824 09:43:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.P63RF6VLQv 00:16:04.824 09:43:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.P63RF6VLQv 00:16:04.824 09:43:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:05.083 [2024-11-19 09:43:52.527205] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:05.083 09:43:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:16:05.342 09:43:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:16:05.600 [2024-11-19 09:43:53.039326] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:05.600 [2024-11-19 09:43:53.039550] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:05.601 09:43:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:16:05.863 malloc0 00:16:05.863 09:43:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:06.123 09:43:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.P63RF6VLQv 00:16:06.382 09:43:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:16:06.641 09:43:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=72430 00:16:06.641 09:43:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:16:06.641 09:43:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:06.641 09:43:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 72430 /var/tmp/bdevperf.sock 00:16:06.641 09:43:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72430 ']' 00:16:06.641 
09:43:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:06.641 09:43:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:06.641 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:06.641 09:43:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:06.641 09:43:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:06.641 09:43:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:06.900 [2024-11-19 09:43:54.287089] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:16:06.900 [2024-11-19 09:43:54.287243] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72430 ] 00:16:06.900 [2024-11-19 09:43:54.432807] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:06.900 [2024-11-19 09:43:54.492072] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:07.158 [2024-11-19 09:43:54.547320] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:07.158 09:43:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:07.158 09:43:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:07.158 09:43:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.P63RF6VLQv 00:16:07.416 09:43:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:16:07.676 [2024-11-19 09:43:55.144050] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:07.676 nvme0n1 00:16:07.676 09:43:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:07.935 Running I/O for 1 seconds... 
00:16:08.873 3810.00 IOPS, 14.88 MiB/s 00:16:08.873 Latency(us) 00:16:08.873 [2024-11-19T09:43:56.496Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:08.873 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:08.873 Verification LBA range: start 0x0 length 0x2000 00:16:08.873 nvme0n1 : 1.03 3827.79 14.95 0.00 0.00 32994.57 7566.43 20375.74 00:16:08.873 [2024-11-19T09:43:56.496Z] =================================================================================================================== 00:16:08.873 [2024-11-19T09:43:56.496Z] Total : 3827.79 14.95 0.00 0.00 32994.57 7566.43 20375.74 00:16:08.873 { 00:16:08.873 "results": [ 00:16:08.873 { 00:16:08.873 "job": "nvme0n1", 00:16:08.873 "core_mask": "0x2", 00:16:08.873 "workload": "verify", 00:16:08.873 "status": "finished", 00:16:08.873 "verify_range": { 00:16:08.873 "start": 0, 00:16:08.873 "length": 8192 00:16:08.873 }, 00:16:08.873 "queue_depth": 128, 00:16:08.873 "io_size": 4096, 00:16:08.873 "runtime": 1.029054, 00:16:08.873 "iops": 3827.7874630485862, 00:16:08.873 "mibps": 14.95229477753354, 00:16:08.873 "io_failed": 0, 00:16:08.873 "io_timeout": 0, 00:16:08.873 "avg_latency_us": 32994.56800572365, 00:16:08.873 "min_latency_us": 7566.4290909090905, 00:16:08.873 "max_latency_us": 20375.738181818182 00:16:08.873 } 00:16:08.873 ], 00:16:08.873 "core_count": 1 00:16:08.873 } 00:16:08.873 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 72430 00:16:08.873 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72430 ']' 00:16:08.873 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72430 00:16:08.873 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:08.873 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:08.873 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72430 00:16:08.873 killing process with pid 72430 00:16:08.873 Received shutdown signal, test time was about 1.000000 seconds 00:16:08.873 00:16:08.874 Latency(us) 00:16:08.874 [2024-11-19T09:43:56.497Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:08.874 [2024-11-19T09:43:56.497Z] =================================================================================================================== 00:16:08.874 [2024-11-19T09:43:56.497Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:08.874 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:08.874 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:08.874 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72430' 00:16:08.874 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72430 00:16:08.874 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72430 00:16:09.132 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 72376 00:16:09.132 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72376 ']' 00:16:09.132 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72376 00:16:09.132 09:43:56 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:09.132 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:09.132 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72376 00:16:09.132 killing process with pid 72376 00:16:09.132 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:09.132 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:09.132 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72376' 00:16:09.132 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72376 00:16:09.132 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72376 00:16:09.391 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:16:09.391 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:09.391 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:09.391 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:09.391 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=72470 00:16:09.391 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:16:09.391 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 72470 00:16:09.391 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72470 ']' 00:16:09.391 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:09.391 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:09.391 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:09.391 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:09.391 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:09.391 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:09.391 [2024-11-19 09:43:56.938945] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:16:09.391 [2024-11-19 09:43:56.939050] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:09.650 [2024-11-19 09:43:57.081340] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:09.650 [2024-11-19 09:43:57.139265] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:09.650 [2024-11-19 09:43:57.139314] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:09.650 [2024-11-19 09:43:57.139325] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:09.650 [2024-11-19 09:43:57.139334] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:09.650 [2024-11-19 09:43:57.139341] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:09.650 [2024-11-19 09:43:57.139756] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:09.650 [2024-11-19 09:43:57.196131] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:10.583 09:43:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:10.583 09:43:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:10.583 09:43:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:10.583 09:43:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:10.583 09:43:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:10.583 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:10.583 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:16:10.583 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.584 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:10.584 [2024-11-19 09:43:58.015272] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:10.584 malloc0 00:16:10.584 [2024-11-19 09:43:58.046805] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:10.584 [2024-11-19 09:43:58.047024] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:10.584 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.584 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=72502 00:16:10.584 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 72502 /var/tmp/bdevperf.sock 00:16:10.584 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:16:10.584 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72502 ']' 00:16:10.584 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:10.584 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:10.584 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:10.584 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:16:10.584 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:10.584 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:10.584 [2024-11-19 09:43:58.137071] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:16:10.584 [2024-11-19 09:43:58.137176] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72502 ] 00:16:10.841 [2024-11-19 09:43:58.282903] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:10.841 [2024-11-19 09:43:58.354700] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:10.841 [2024-11-19 09:43:58.417317] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:11.099 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:11.099 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:11.099 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.P63RF6VLQv 00:16:11.357 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:16:11.615 [2024-11-19 09:43:59.066363] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:11.615 nvme0n1 00:16:11.615 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:11.873 Running I/O for 1 seconds... 
00:16:12.807 3968.00 IOPS, 15.50 MiB/s 00:16:12.807 Latency(us) 00:16:12.807 [2024-11-19T09:44:00.430Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:12.807 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:12.807 Verification LBA range: start 0x0 length 0x2000 00:16:12.807 nvme0n1 : 1.02 4015.76 15.69 0.00 0.00 31520.32 9175.04 21686.46 00:16:12.807 [2024-11-19T09:44:00.430Z] =================================================================================================================== 00:16:12.807 [2024-11-19T09:44:00.430Z] Total : 4015.76 15.69 0.00 0.00 31520.32 9175.04 21686.46 00:16:12.807 { 00:16:12.807 "results": [ 00:16:12.807 { 00:16:12.807 "job": "nvme0n1", 00:16:12.807 "core_mask": "0x2", 00:16:12.807 "workload": "verify", 00:16:12.807 "status": "finished", 00:16:12.807 "verify_range": { 00:16:12.807 "start": 0, 00:16:12.807 "length": 8192 00:16:12.807 }, 00:16:12.807 "queue_depth": 128, 00:16:12.807 "io_size": 4096, 00:16:12.807 "runtime": 1.019981, 00:16:12.807 "iops": 4015.7610779024317, 00:16:12.807 "mibps": 15.686566710556374, 00:16:12.807 "io_failed": 0, 00:16:12.807 "io_timeout": 0, 00:16:12.808 "avg_latency_us": 31520.32, 00:16:12.808 "min_latency_us": 9175.04, 00:16:12.808 "max_latency_us": 21686.458181818183 00:16:12.808 } 00:16:12.808 ], 00:16:12.808 "core_count": 1 00:16:12.808 } 00:16:12.808 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:16:12.808 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.808 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:13.066 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.066 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:16:13.066 "subsystems": [ 00:16:13.066 { 00:16:13.066 "subsystem": "keyring", 00:16:13.066 "config": [ 00:16:13.066 { 00:16:13.066 "method": "keyring_file_add_key", 00:16:13.066 "params": { 00:16:13.066 "name": "key0", 00:16:13.066 "path": "/tmp/tmp.P63RF6VLQv" 00:16:13.066 } 00:16:13.066 } 00:16:13.066 ] 00:16:13.066 }, 00:16:13.066 { 00:16:13.066 "subsystem": "iobuf", 00:16:13.066 "config": [ 00:16:13.066 { 00:16:13.066 "method": "iobuf_set_options", 00:16:13.066 "params": { 00:16:13.066 "small_pool_count": 8192, 00:16:13.066 "large_pool_count": 1024, 00:16:13.066 "small_bufsize": 8192, 00:16:13.066 "large_bufsize": 135168, 00:16:13.066 "enable_numa": false 00:16:13.066 } 00:16:13.066 } 00:16:13.066 ] 00:16:13.066 }, 00:16:13.066 { 00:16:13.066 "subsystem": "sock", 00:16:13.066 "config": [ 00:16:13.067 { 00:16:13.067 "method": "sock_set_default_impl", 00:16:13.067 "params": { 00:16:13.067 "impl_name": "uring" 00:16:13.067 } 00:16:13.067 }, 00:16:13.067 { 00:16:13.067 "method": "sock_impl_set_options", 00:16:13.067 "params": { 00:16:13.067 "impl_name": "ssl", 00:16:13.067 "recv_buf_size": 4096, 00:16:13.067 "send_buf_size": 4096, 00:16:13.067 "enable_recv_pipe": true, 00:16:13.067 "enable_quickack": false, 00:16:13.067 "enable_placement_id": 0, 00:16:13.067 "enable_zerocopy_send_server": true, 00:16:13.067 "enable_zerocopy_send_client": false, 00:16:13.067 "zerocopy_threshold": 0, 00:16:13.067 "tls_version": 0, 00:16:13.067 "enable_ktls": false 00:16:13.067 } 00:16:13.067 }, 00:16:13.067 { 00:16:13.067 "method": "sock_impl_set_options", 00:16:13.067 "params": { 00:16:13.067 "impl_name": "posix", 00:16:13.067 
"recv_buf_size": 2097152, 00:16:13.067 "send_buf_size": 2097152, 00:16:13.067 "enable_recv_pipe": true, 00:16:13.067 "enable_quickack": false, 00:16:13.067 "enable_placement_id": 0, 00:16:13.067 "enable_zerocopy_send_server": true, 00:16:13.067 "enable_zerocopy_send_client": false, 00:16:13.067 "zerocopy_threshold": 0, 00:16:13.067 "tls_version": 0, 00:16:13.067 "enable_ktls": false 00:16:13.067 } 00:16:13.067 }, 00:16:13.067 { 00:16:13.067 "method": "sock_impl_set_options", 00:16:13.067 "params": { 00:16:13.067 "impl_name": "uring", 00:16:13.067 "recv_buf_size": 2097152, 00:16:13.067 "send_buf_size": 2097152, 00:16:13.067 "enable_recv_pipe": true, 00:16:13.067 "enable_quickack": false, 00:16:13.067 "enable_placement_id": 0, 00:16:13.067 "enable_zerocopy_send_server": false, 00:16:13.067 "enable_zerocopy_send_client": false, 00:16:13.067 "zerocopy_threshold": 0, 00:16:13.067 "tls_version": 0, 00:16:13.067 "enable_ktls": false 00:16:13.067 } 00:16:13.067 } 00:16:13.067 ] 00:16:13.067 }, 00:16:13.067 { 00:16:13.067 "subsystem": "vmd", 00:16:13.067 "config": [] 00:16:13.067 }, 00:16:13.067 { 00:16:13.067 "subsystem": "accel", 00:16:13.067 "config": [ 00:16:13.067 { 00:16:13.067 "method": "accel_set_options", 00:16:13.067 "params": { 00:16:13.067 "small_cache_size": 128, 00:16:13.067 "large_cache_size": 16, 00:16:13.067 "task_count": 2048, 00:16:13.067 "sequence_count": 2048, 00:16:13.067 "buf_count": 2048 00:16:13.067 } 00:16:13.067 } 00:16:13.067 ] 00:16:13.067 }, 00:16:13.067 { 00:16:13.067 "subsystem": "bdev", 00:16:13.067 "config": [ 00:16:13.067 { 00:16:13.067 "method": "bdev_set_options", 00:16:13.067 "params": { 00:16:13.067 "bdev_io_pool_size": 65535, 00:16:13.067 "bdev_io_cache_size": 256, 00:16:13.067 "bdev_auto_examine": true, 00:16:13.067 "iobuf_small_cache_size": 128, 00:16:13.067 "iobuf_large_cache_size": 16 00:16:13.067 } 00:16:13.067 }, 00:16:13.067 { 00:16:13.067 "method": "bdev_raid_set_options", 00:16:13.067 "params": { 00:16:13.067 "process_window_size_kb": 1024, 00:16:13.067 "process_max_bandwidth_mb_sec": 0 00:16:13.067 } 00:16:13.067 }, 00:16:13.067 { 00:16:13.067 "method": "bdev_iscsi_set_options", 00:16:13.067 "params": { 00:16:13.067 "timeout_sec": 30 00:16:13.067 } 00:16:13.067 }, 00:16:13.067 { 00:16:13.067 "method": "bdev_nvme_set_options", 00:16:13.067 "params": { 00:16:13.067 "action_on_timeout": "none", 00:16:13.067 "timeout_us": 0, 00:16:13.067 "timeout_admin_us": 0, 00:16:13.067 "keep_alive_timeout_ms": 10000, 00:16:13.067 "arbitration_burst": 0, 00:16:13.067 "low_priority_weight": 0, 00:16:13.067 "medium_priority_weight": 0, 00:16:13.067 "high_priority_weight": 0, 00:16:13.067 "nvme_adminq_poll_period_us": 10000, 00:16:13.067 "nvme_ioq_poll_period_us": 0, 00:16:13.067 "io_queue_requests": 0, 00:16:13.067 "delay_cmd_submit": true, 00:16:13.067 "transport_retry_count": 4, 00:16:13.067 "bdev_retry_count": 3, 00:16:13.067 "transport_ack_timeout": 0, 00:16:13.067 "ctrlr_loss_timeout_sec": 0, 00:16:13.067 "reconnect_delay_sec": 0, 00:16:13.067 "fast_io_fail_timeout_sec": 0, 00:16:13.067 "disable_auto_failback": false, 00:16:13.067 "generate_uuids": false, 00:16:13.067 "transport_tos": 0, 00:16:13.067 "nvme_error_stat": false, 00:16:13.067 "rdma_srq_size": 0, 00:16:13.067 "io_path_stat": false, 00:16:13.067 "allow_accel_sequence": false, 00:16:13.067 "rdma_max_cq_size": 0, 00:16:13.067 "rdma_cm_event_timeout_ms": 0, 00:16:13.067 "dhchap_digests": [ 00:16:13.067 "sha256", 00:16:13.067 "sha384", 00:16:13.067 "sha512" 00:16:13.067 ], 00:16:13.067 "dhchap_dhgroups": [ 
00:16:13.067 "null", 00:16:13.067 "ffdhe2048", 00:16:13.067 "ffdhe3072", 00:16:13.067 "ffdhe4096", 00:16:13.067 "ffdhe6144", 00:16:13.067 "ffdhe8192" 00:16:13.067 ] 00:16:13.067 } 00:16:13.067 }, 00:16:13.067 { 00:16:13.067 "method": "bdev_nvme_set_hotplug", 00:16:13.067 "params": { 00:16:13.067 "period_us": 100000, 00:16:13.067 "enable": false 00:16:13.067 } 00:16:13.067 }, 00:16:13.067 { 00:16:13.067 "method": "bdev_malloc_create", 00:16:13.067 "params": { 00:16:13.067 "name": "malloc0", 00:16:13.067 "num_blocks": 8192, 00:16:13.067 "block_size": 4096, 00:16:13.067 "physical_block_size": 4096, 00:16:13.067 "uuid": "3a88f2d5-faec-4594-82e3-261e6f9d8fa7", 00:16:13.067 "optimal_io_boundary": 0, 00:16:13.067 "md_size": 0, 00:16:13.067 "dif_type": 0, 00:16:13.067 "dif_is_head_of_md": false, 00:16:13.067 "dif_pi_format": 0 00:16:13.067 } 00:16:13.067 }, 00:16:13.067 { 00:16:13.067 "method": "bdev_wait_for_examine" 00:16:13.067 } 00:16:13.067 ] 00:16:13.067 }, 00:16:13.067 { 00:16:13.067 "subsystem": "nbd", 00:16:13.067 "config": [] 00:16:13.067 }, 00:16:13.067 { 00:16:13.067 "subsystem": "scheduler", 00:16:13.067 "config": [ 00:16:13.067 { 00:16:13.067 "method": "framework_set_scheduler", 00:16:13.067 "params": { 00:16:13.067 "name": "static" 00:16:13.067 } 00:16:13.067 } 00:16:13.067 ] 00:16:13.067 }, 00:16:13.067 { 00:16:13.067 "subsystem": "nvmf", 00:16:13.067 "config": [ 00:16:13.067 { 00:16:13.067 "method": "nvmf_set_config", 00:16:13.067 "params": { 00:16:13.067 "discovery_filter": "match_any", 00:16:13.067 "admin_cmd_passthru": { 00:16:13.067 "identify_ctrlr": false 00:16:13.067 }, 00:16:13.067 "dhchap_digests": [ 00:16:13.067 "sha256", 00:16:13.067 "sha384", 00:16:13.067 "sha512" 00:16:13.067 ], 00:16:13.067 "dhchap_dhgroups": [ 00:16:13.067 "null", 00:16:13.067 "ffdhe2048", 00:16:13.067 "ffdhe3072", 00:16:13.067 "ffdhe4096", 00:16:13.067 "ffdhe6144", 00:16:13.067 "ffdhe8192" 00:16:13.067 ] 00:16:13.067 } 00:16:13.067 }, 00:16:13.067 { 00:16:13.067 "method": "nvmf_set_max_subsystems", 00:16:13.067 "params": { 00:16:13.067 "max_subsystems": 1024 00:16:13.067 } 00:16:13.067 }, 00:16:13.067 { 00:16:13.067 "method": "nvmf_set_crdt", 00:16:13.067 "params": { 00:16:13.067 "crdt1": 0, 00:16:13.067 "crdt2": 0, 00:16:13.067 "crdt3": 0 00:16:13.067 } 00:16:13.067 }, 00:16:13.067 { 00:16:13.067 "method": "nvmf_create_transport", 00:16:13.067 "params": { 00:16:13.067 "trtype": "TCP", 00:16:13.067 "max_queue_depth": 128, 00:16:13.067 "max_io_qpairs_per_ctrlr": 127, 00:16:13.067 "in_capsule_data_size": 4096, 00:16:13.067 "max_io_size": 131072, 00:16:13.067 "io_unit_size": 131072, 00:16:13.067 "max_aq_depth": 128, 00:16:13.067 "num_shared_buffers": 511, 00:16:13.067 "buf_cache_size": 4294967295, 00:16:13.067 "dif_insert_or_strip": false, 00:16:13.067 "zcopy": false, 00:16:13.067 "c2h_success": false, 00:16:13.067 "sock_priority": 0, 00:16:13.067 "abort_timeout_sec": 1, 00:16:13.067 "ack_timeout": 0, 00:16:13.067 "data_wr_pool_size": 0 00:16:13.067 } 00:16:13.067 }, 00:16:13.067 { 00:16:13.067 "method": "nvmf_create_subsystem", 00:16:13.067 "params": { 00:16:13.067 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:13.067 "allow_any_host": false, 00:16:13.067 "serial_number": "00000000000000000000", 00:16:13.067 "model_number": "SPDK bdev Controller", 00:16:13.067 "max_namespaces": 32, 00:16:13.067 "min_cntlid": 1, 00:16:13.067 "max_cntlid": 65519, 00:16:13.067 "ana_reporting": false 00:16:13.067 } 00:16:13.067 }, 00:16:13.067 { 00:16:13.067 "method": "nvmf_subsystem_add_host", 00:16:13.067 "params": { 
00:16:13.067 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:13.067 "host": "nqn.2016-06.io.spdk:host1", 00:16:13.067 "psk": "key0" 00:16:13.067 } 00:16:13.067 }, 00:16:13.067 { 00:16:13.067 "method": "nvmf_subsystem_add_ns", 00:16:13.067 "params": { 00:16:13.067 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:13.067 "namespace": { 00:16:13.067 "nsid": 1, 00:16:13.067 "bdev_name": "malloc0", 00:16:13.067 "nguid": "3A88F2D5FAEC459482E3261E6F9D8FA7", 00:16:13.068 "uuid": "3a88f2d5-faec-4594-82e3-261e6f9d8fa7", 00:16:13.068 "no_auto_visible": false 00:16:13.068 } 00:16:13.068 } 00:16:13.068 }, 00:16:13.068 { 00:16:13.068 "method": "nvmf_subsystem_add_listener", 00:16:13.068 "params": { 00:16:13.068 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:13.068 "listen_address": { 00:16:13.068 "trtype": "TCP", 00:16:13.068 "adrfam": "IPv4", 00:16:13.068 "traddr": "10.0.0.3", 00:16:13.068 "trsvcid": "4420" 00:16:13.068 }, 00:16:13.068 "secure_channel": false, 00:16:13.068 "sock_impl": "ssl" 00:16:13.068 } 00:16:13.068 } 00:16:13.068 ] 00:16:13.068 } 00:16:13.068 ] 00:16:13.068 }' 00:16:13.068 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:16:13.326 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:16:13.326 "subsystems": [ 00:16:13.326 { 00:16:13.326 "subsystem": "keyring", 00:16:13.326 "config": [ 00:16:13.326 { 00:16:13.326 "method": "keyring_file_add_key", 00:16:13.326 "params": { 00:16:13.326 "name": "key0", 00:16:13.326 "path": "/tmp/tmp.P63RF6VLQv" 00:16:13.326 } 00:16:13.326 } 00:16:13.326 ] 00:16:13.326 }, 00:16:13.326 { 00:16:13.326 "subsystem": "iobuf", 00:16:13.326 "config": [ 00:16:13.326 { 00:16:13.326 "method": "iobuf_set_options", 00:16:13.326 "params": { 00:16:13.327 "small_pool_count": 8192, 00:16:13.327 "large_pool_count": 1024, 00:16:13.327 "small_bufsize": 8192, 00:16:13.327 "large_bufsize": 135168, 00:16:13.327 "enable_numa": false 00:16:13.327 } 00:16:13.327 } 00:16:13.327 ] 00:16:13.327 }, 00:16:13.327 { 00:16:13.327 "subsystem": "sock", 00:16:13.327 "config": [ 00:16:13.327 { 00:16:13.327 "method": "sock_set_default_impl", 00:16:13.327 "params": { 00:16:13.327 "impl_name": "uring" 00:16:13.327 } 00:16:13.327 }, 00:16:13.327 { 00:16:13.327 "method": "sock_impl_set_options", 00:16:13.327 "params": { 00:16:13.327 "impl_name": "ssl", 00:16:13.327 "recv_buf_size": 4096, 00:16:13.327 "send_buf_size": 4096, 00:16:13.327 "enable_recv_pipe": true, 00:16:13.327 "enable_quickack": false, 00:16:13.327 "enable_placement_id": 0, 00:16:13.327 "enable_zerocopy_send_server": true, 00:16:13.327 "enable_zerocopy_send_client": false, 00:16:13.327 "zerocopy_threshold": 0, 00:16:13.327 "tls_version": 0, 00:16:13.327 "enable_ktls": false 00:16:13.327 } 00:16:13.327 }, 00:16:13.327 { 00:16:13.327 "method": "sock_impl_set_options", 00:16:13.327 "params": { 00:16:13.327 "impl_name": "posix", 00:16:13.327 "recv_buf_size": 2097152, 00:16:13.327 "send_buf_size": 2097152, 00:16:13.327 "enable_recv_pipe": true, 00:16:13.327 "enable_quickack": false, 00:16:13.327 "enable_placement_id": 0, 00:16:13.327 "enable_zerocopy_send_server": true, 00:16:13.327 "enable_zerocopy_send_client": false, 00:16:13.327 "zerocopy_threshold": 0, 00:16:13.327 "tls_version": 0, 00:16:13.327 "enable_ktls": false 00:16:13.327 } 00:16:13.327 }, 00:16:13.327 { 00:16:13.327 "method": "sock_impl_set_options", 00:16:13.327 "params": { 00:16:13.327 "impl_name": "uring", 00:16:13.327 "recv_buf_size": 2097152, 
00:16:13.327 "send_buf_size": 2097152, 00:16:13.327 "enable_recv_pipe": true, 00:16:13.327 "enable_quickack": false, 00:16:13.327 "enable_placement_id": 0, 00:16:13.327 "enable_zerocopy_send_server": false, 00:16:13.327 "enable_zerocopy_send_client": false, 00:16:13.327 "zerocopy_threshold": 0, 00:16:13.327 "tls_version": 0, 00:16:13.327 "enable_ktls": false 00:16:13.327 } 00:16:13.327 } 00:16:13.327 ] 00:16:13.327 }, 00:16:13.327 { 00:16:13.327 "subsystem": "vmd", 00:16:13.327 "config": [] 00:16:13.327 }, 00:16:13.327 { 00:16:13.327 "subsystem": "accel", 00:16:13.327 "config": [ 00:16:13.327 { 00:16:13.327 "method": "accel_set_options", 00:16:13.327 "params": { 00:16:13.327 "small_cache_size": 128, 00:16:13.327 "large_cache_size": 16, 00:16:13.327 "task_count": 2048, 00:16:13.327 "sequence_count": 2048, 00:16:13.327 "buf_count": 2048 00:16:13.327 } 00:16:13.327 } 00:16:13.327 ] 00:16:13.327 }, 00:16:13.327 { 00:16:13.327 "subsystem": "bdev", 00:16:13.327 "config": [ 00:16:13.327 { 00:16:13.327 "method": "bdev_set_options", 00:16:13.327 "params": { 00:16:13.327 "bdev_io_pool_size": 65535, 00:16:13.327 "bdev_io_cache_size": 256, 00:16:13.327 "bdev_auto_examine": true, 00:16:13.327 "iobuf_small_cache_size": 128, 00:16:13.327 "iobuf_large_cache_size": 16 00:16:13.327 } 00:16:13.327 }, 00:16:13.327 { 00:16:13.327 "method": "bdev_raid_set_options", 00:16:13.327 "params": { 00:16:13.327 "process_window_size_kb": 1024, 00:16:13.327 "process_max_bandwidth_mb_sec": 0 00:16:13.327 } 00:16:13.327 }, 00:16:13.327 { 00:16:13.327 "method": "bdev_iscsi_set_options", 00:16:13.327 "params": { 00:16:13.327 "timeout_sec": 30 00:16:13.327 } 00:16:13.327 }, 00:16:13.327 { 00:16:13.327 "method": "bdev_nvme_set_options", 00:16:13.327 "params": { 00:16:13.327 "action_on_timeout": "none", 00:16:13.327 "timeout_us": 0, 00:16:13.327 "timeout_admin_us": 0, 00:16:13.327 "keep_alive_timeout_ms": 10000, 00:16:13.327 "arbitration_burst": 0, 00:16:13.327 "low_priority_weight": 0, 00:16:13.327 "medium_priority_weight": 0, 00:16:13.327 "high_priority_weight": 0, 00:16:13.327 "nvme_adminq_poll_period_us": 10000, 00:16:13.327 "nvme_ioq_poll_period_us": 0, 00:16:13.327 "io_queue_requests": 512, 00:16:13.327 "delay_cmd_submit": true, 00:16:13.327 "transport_retry_count": 4, 00:16:13.327 "bdev_retry_count": 3, 00:16:13.327 "transport_ack_timeout": 0, 00:16:13.327 "ctrlr_loss_timeout_sec": 0, 00:16:13.327 "reconnect_delay_sec": 0, 00:16:13.327 "fast_io_fail_timeout_sec": 0, 00:16:13.327 "disable_auto_failback": false, 00:16:13.327 "generate_uuids": false, 00:16:13.327 "transport_tos": 0, 00:16:13.327 "nvme_error_stat": false, 00:16:13.327 "rdma_srq_size": 0, 00:16:13.327 "io_path_stat": false, 00:16:13.327 "allow_accel_sequence": false, 00:16:13.327 "rdma_max_cq_size": 0, 00:16:13.327 "rdma_cm_event_timeout_ms": 0, 00:16:13.327 "dhchap_digests": [ 00:16:13.327 "sha256", 00:16:13.327 "sha384", 00:16:13.327 "sha512" 00:16:13.327 ], 00:16:13.327 "dhchap_dhgroups": [ 00:16:13.327 "null", 00:16:13.327 "ffdhe2048", 00:16:13.327 "ffdhe3072", 00:16:13.327 "ffdhe4096", 00:16:13.327 "ffdhe6144", 00:16:13.327 "ffdhe8192" 00:16:13.327 ] 00:16:13.327 } 00:16:13.327 }, 00:16:13.327 { 00:16:13.327 "method": "bdev_nvme_attach_controller", 00:16:13.327 "params": { 00:16:13.327 "name": "nvme0", 00:16:13.327 "trtype": "TCP", 00:16:13.327 "adrfam": "IPv4", 00:16:13.327 "traddr": "10.0.0.3", 00:16:13.327 "trsvcid": "4420", 00:16:13.327 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:13.327 "prchk_reftag": false, 00:16:13.327 "prchk_guard": false, 
00:16:13.327 "ctrlr_loss_timeout_sec": 0, 00:16:13.327 "reconnect_delay_sec": 0, 00:16:13.327 "fast_io_fail_timeout_sec": 0, 00:16:13.327 "psk": "key0", 00:16:13.327 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:13.327 "hdgst": false, 00:16:13.327 "ddgst": false, 00:16:13.327 "multipath": "multipath" 00:16:13.327 } 00:16:13.327 }, 00:16:13.327 { 00:16:13.327 "method": "bdev_nvme_set_hotplug", 00:16:13.327 "params": { 00:16:13.327 "period_us": 100000, 00:16:13.327 "enable": false 00:16:13.327 } 00:16:13.327 }, 00:16:13.327 { 00:16:13.327 "method": "bdev_enable_histogram", 00:16:13.327 "params": { 00:16:13.327 "name": "nvme0n1", 00:16:13.327 "enable": true 00:16:13.327 } 00:16:13.327 }, 00:16:13.327 { 00:16:13.327 "method": "bdev_wait_for_examine" 00:16:13.327 } 00:16:13.327 ] 00:16:13.327 }, 00:16:13.328 { 00:16:13.328 "subsystem": "nbd", 00:16:13.328 "config": [] 00:16:13.328 } 00:16:13.328 ] 00:16:13.328 }' 00:16:13.328 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 72502 00:16:13.328 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72502 ']' 00:16:13.328 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72502 00:16:13.328 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:13.328 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:13.328 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72502 00:16:13.328 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:13.328 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:13.328 killing process with pid 72502 00:16:13.328 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72502' 00:16:13.328 Received shutdown signal, test time was about 1.000000 seconds 00:16:13.328 00:16:13.328 Latency(us) 00:16:13.328 [2024-11-19T09:44:00.951Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:13.328 [2024-11-19T09:44:00.951Z] =================================================================================================================== 00:16:13.328 [2024-11-19T09:44:00.951Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:13.328 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72502 00:16:13.328 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72502 00:16:13.586 09:44:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 72470 00:16:13.586 09:44:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72470 ']' 00:16:13.586 09:44:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72470 00:16:13.586 09:44:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:13.586 09:44:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:13.586 09:44:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72470 00:16:13.586 09:44:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:13.586 09:44:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:13.586 killing process with pid 72470 00:16:13.586 09:44:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72470' 00:16:13.586 09:44:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72470 00:16:13.586 09:44:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72470 00:16:13.844 09:44:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:16:13.844 09:44:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:13.844 09:44:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:16:13.844 "subsystems": [ 00:16:13.844 { 00:16:13.844 "subsystem": "keyring", 00:16:13.844 "config": [ 00:16:13.844 { 00:16:13.844 "method": "keyring_file_add_key", 00:16:13.844 "params": { 00:16:13.844 "name": "key0", 00:16:13.844 "path": "/tmp/tmp.P63RF6VLQv" 00:16:13.844 } 00:16:13.844 } 00:16:13.844 ] 00:16:13.844 }, 00:16:13.844 { 00:16:13.844 "subsystem": "iobuf", 00:16:13.844 "config": [ 00:16:13.844 { 00:16:13.844 "method": "iobuf_set_options", 00:16:13.844 "params": { 00:16:13.844 "small_pool_count": 8192, 00:16:13.844 "large_pool_count": 1024, 00:16:13.844 "small_bufsize": 8192, 00:16:13.844 "large_bufsize": 135168, 00:16:13.844 "enable_numa": false 00:16:13.844 } 00:16:13.844 } 00:16:13.844 ] 00:16:13.844 }, 00:16:13.844 { 00:16:13.844 "subsystem": "sock", 00:16:13.844 "config": [ 00:16:13.844 { 00:16:13.844 "method": "sock_set_default_impl", 00:16:13.844 "params": { 00:16:13.844 "impl_name": "uring" 00:16:13.844 } 00:16:13.844 }, 00:16:13.844 { 00:16:13.844 "method": "sock_impl_set_options", 00:16:13.844 "params": { 00:16:13.844 "impl_name": "ssl", 00:16:13.844 "recv_buf_size": 4096, 00:16:13.844 "send_buf_size": 4096, 00:16:13.844 "enable_recv_pipe": true, 00:16:13.844 "enable_quickack": false, 00:16:13.844 "enable_placement_id": 0, 00:16:13.844 "enable_zerocopy_send_server": true, 00:16:13.844 "enable_zerocopy_send_client": false, 00:16:13.844 "zerocopy_threshold": 0, 00:16:13.844 "tls_version": 0, 00:16:13.844 "enable_ktls": false 00:16:13.844 } 00:16:13.844 }, 00:16:13.844 { 00:16:13.844 "method": "sock_impl_set_options", 00:16:13.844 "params": { 00:16:13.844 "impl_name": "posix", 00:16:13.844 "recv_buf_size": 2097152, 00:16:13.844 "send_buf_size": 2097152, 00:16:13.844 "enable_recv_pipe": true, 00:16:13.844 "enable_quickack": false, 00:16:13.844 "enable_placement_id": 0, 00:16:13.844 "enable_zerocopy_send_server": true, 00:16:13.844 "enable_zerocopy_send_client": false, 00:16:13.844 "zerocopy_threshold": 0, 00:16:13.844 "tls_version": 0, 00:16:13.844 "enable_ktls": false 00:16:13.844 } 00:16:13.844 }, 00:16:13.844 { 00:16:13.844 "method": "sock_impl_set_options", 00:16:13.844 "params": { 00:16:13.844 "impl_name": "uring", 00:16:13.844 "recv_buf_size": 2097152, 00:16:13.844 "send_buf_size": 2097152, 00:16:13.844 "enable_recv_pipe": true, 00:16:13.844 "enable_quickack": false, 00:16:13.844 "enable_placement_id": 0, 00:16:13.844 "enable_zerocopy_send_server": false, 00:16:13.844 "enable_zerocopy_send_client": false, 00:16:13.844 "zerocopy_threshold": 0, 00:16:13.844 "tls_version": 0, 00:16:13.844 "enable_ktls": false 00:16:13.844 } 00:16:13.844 } 00:16:13.844 ] 00:16:13.844 }, 00:16:13.844 { 00:16:13.844 "subsystem": "vmd", 00:16:13.844 "config": [] 00:16:13.844 }, 00:16:13.844 { 00:16:13.844 "subsystem": "accel", 
00:16:13.844 "config": [ 00:16:13.844 { 00:16:13.844 "method": "accel_set_options", 00:16:13.844 "params": { 00:16:13.844 "small_cache_size": 128, 00:16:13.844 "large_cache_size": 16, 00:16:13.844 "task_count": 2048, 00:16:13.844 "sequence_count": 2048, 00:16:13.844 "buf_count": 2048 00:16:13.844 } 00:16:13.844 } 00:16:13.844 ] 00:16:13.844 }, 00:16:13.844 { 00:16:13.844 "subsystem": "bdev", 00:16:13.844 "config": [ 00:16:13.844 { 00:16:13.844 "method": "bdev_set_options", 00:16:13.844 "params": { 00:16:13.844 "bdev_io_pool_size": 65535, 00:16:13.844 "bdev_io_cache_size": 256, 00:16:13.844 "bdev_auto_examine": true, 00:16:13.844 "iobuf_small_cache_size": 128, 00:16:13.844 "iobuf_large_cache_size": 16 00:16:13.844 } 00:16:13.844 }, 00:16:13.844 { 00:16:13.844 "method": "bdev_raid_set_options", 00:16:13.844 "params": { 00:16:13.844 "process_window_size_kb": 1024, 00:16:13.844 "process_max_bandwidth_mb_sec": 0 00:16:13.844 } 00:16:13.844 }, 00:16:13.844 { 00:16:13.844 "method": "bdev_iscsi_set_options", 00:16:13.844 "params": { 00:16:13.844 "timeout_sec": 30 00:16:13.844 } 00:16:13.844 }, 00:16:13.844 { 00:16:13.844 "method": "bdev_nvme_set_options", 00:16:13.844 "params": { 00:16:13.844 "action_on_timeout": "none", 00:16:13.844 "timeout_us": 0, 00:16:13.844 "timeout_admin_us": 0, 00:16:13.845 "keep_alive_timeout_ms": 10000, 00:16:13.845 "arbitration_burst": 0, 00:16:13.845 "low_priority_weight": 0, 00:16:13.845 "medium_priority_weight": 0, 00:16:13.845 "high_priority_weight": 0, 00:16:13.845 "nvme_adminq_poll_period_us": 10000, 00:16:13.845 "nvme_ioq_poll_period_us": 0, 00:16:13.845 "io_queue_requests": 0, 00:16:13.845 "delay_cmd_submit": true, 00:16:13.845 "transport_retry_count": 4, 00:16:13.845 "bdev_retry_count": 3, 00:16:13.845 "transport_ack_timeout": 0, 00:16:13.845 "ctrlr_loss_timeout_sec": 0, 00:16:13.845 "reconnect_delay_sec": 0, 00:16:13.845 "fast_io_fail_timeout_sec": 0, 00:16:13.845 "disable_auto_failback": false, 00:16:13.845 "generate_uuids": false, 00:16:13.845 "transport_tos": 0, 00:16:13.845 "nvme_error_stat": false, 00:16:13.845 "rdma_srq_size": 0, 00:16:13.845 "io_path_stat": false, 00:16:13.845 "allow_accel_sequence": false, 00:16:13.845 "rdma_max_cq_size": 0, 00:16:13.845 "rdma_cm_event_timeout_ms": 0, 00:16:13.845 "dhchap_digests": [ 00:16:13.845 "sha256", 00:16:13.845 "sha384", 00:16:13.845 "sha512" 00:16:13.845 ], 00:16:13.845 "dhchap_dhgroups": [ 00:16:13.845 "null", 00:16:13.845 "ffdhe2048", 00:16:13.845 "ffdhe3072", 00:16:13.845 "ffdhe4096", 00:16:13.845 "ffdhe6144", 00:16:13.845 "ffdhe8192" 00:16:13.845 ] 00:16:13.845 } 00:16:13.845 }, 00:16:13.845 { 00:16:13.845 "method": "bdev_nvme_set_hotplug", 00:16:13.845 "params": { 00:16:13.845 "period_us": 100000, 00:16:13.845 "enable": false 00:16:13.845 } 00:16:13.845 }, 00:16:13.845 { 00:16:13.845 "method": "bdev_malloc_create", 00:16:13.845 "params": { 00:16:13.845 "name": "malloc0", 00:16:13.845 "num_blocks": 8192, 00:16:13.845 "block_size": 4096, 00:16:13.845 "physical_block_size": 4096, 00:16:13.845 "uuid": "3a88f2d5-faec-4594-82e3-261e6f9d8fa7", 00:16:13.845 "optimal_io_boundary": 0, 00:16:13.845 "md_size": 0, 00:16:13.845 "dif_type": 0, 00:16:13.845 "dif_is_head_of_md": false, 00:16:13.845 "dif_pi_format": 0 00:16:13.845 } 00:16:13.845 }, 00:16:13.845 { 00:16:13.845 "method": "bdev_wait_for_examine" 00:16:13.845 } 00:16:13.845 ] 00:16:13.845 }, 00:16:13.845 { 00:16:13.845 "subsystem": "nbd", 00:16:13.845 "config": [] 00:16:13.845 }, 00:16:13.845 { 00:16:13.845 "subsystem": "scheduler", 00:16:13.845 "config": [ 
00:16:13.845 { 00:16:13.845 "method": "framework_set_scheduler", 00:16:13.845 "params": { 00:16:13.845 "name": "static" 00:16:13.845 } 00:16:13.845 } 00:16:13.845 ] 00:16:13.845 }, 00:16:13.845 { 00:16:13.845 "subsystem": "nvmf", 00:16:13.845 "config": [ 00:16:13.845 { 00:16:13.845 "method": "nvmf_set_config", 00:16:13.845 "params": { 00:16:13.845 "discovery_filter": "match_any", 00:16:13.845 "admin_cmd_passthru": { 00:16:13.845 "identify_ctrlr": false 00:16:13.845 }, 00:16:13.845 "dhchap_digests": [ 00:16:13.845 "sha256", 00:16:13.845 "sha384", 00:16:13.845 "sha512" 00:16:13.845 ], 00:16:13.845 "dhchap_dhgroups": [ 00:16:13.845 "null", 00:16:13.845 "ffdhe2048", 00:16:13.845 "ffdhe3072", 00:16:13.845 "ffdhe4096", 00:16:13.845 "ffdhe6144", 00:16:13.845 "ffdhe8192" 00:16:13.845 ] 00:16:13.845 } 00:16:13.845 }, 00:16:13.845 { 00:16:13.845 "method": "nvmf_set_max_subsyste 09:44:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:13.845 ms", 00:16:13.845 "params": { 00:16:13.845 "max_subsystems": 1024 00:16:13.845 } 00:16:13.845 }, 00:16:13.845 { 00:16:13.845 "method": "nvmf_set_crdt", 00:16:13.845 "params": { 00:16:13.845 "crdt1": 0, 00:16:13.845 "crdt2": 0, 00:16:13.845 "crdt3": 0 00:16:13.845 } 00:16:13.845 }, 00:16:13.845 { 00:16:13.845 "method": "nvmf_create_transport", 00:16:13.845 "params": { 00:16:13.845 "trtype": "TCP", 00:16:13.845 "max_queue_depth": 128, 00:16:13.845 "max_io_qpairs_per_ctrlr": 127, 00:16:13.845 "in_capsule_data_size": 4096, 00:16:13.845 "max_io_size": 131072, 00:16:13.845 "io_unit_size": 131072, 00:16:13.845 "max_aq_depth": 128, 00:16:13.845 "num_shared_buffers": 511, 00:16:13.845 "buf_cache_size": 4294967295, 00:16:13.845 "dif_insert_or_strip": false, 00:16:13.845 "zcopy": false, 00:16:13.845 "c2h_success": false, 00:16:13.845 "sock_priority": 0, 00:16:13.845 "abort_timeout_sec": 1, 00:16:13.845 "ack_timeout": 0, 00:16:13.845 "data_wr_pool_size": 0 00:16:13.845 } 00:16:13.845 }, 00:16:13.845 { 00:16:13.845 "method": "nvmf_create_subsystem", 00:16:13.845 "params": { 00:16:13.845 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:13.845 "allow_any_host": false, 00:16:13.845 "serial_number": "00000000000000000000", 00:16:13.845 "model_number": "SPDK bdev Controller", 00:16:13.845 "max_namespaces": 32, 00:16:13.845 "min_cntlid": 1, 00:16:13.845 "max_cntlid": 65519, 00:16:13.845 "ana_reporting": false 00:16:13.845 } 00:16:13.845 }, 00:16:13.845 { 00:16:13.845 "method": "nvmf_subsystem_add_host", 00:16:13.845 "params": { 00:16:13.845 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:13.845 "host": "nqn.2016-06.io.spdk:host1", 00:16:13.845 "psk": "key0" 00:16:13.845 } 00:16:13.845 }, 00:16:13.845 { 00:16:13.845 "method": "nvmf_subsystem_add_ns", 00:16:13.845 "params": { 00:16:13.845 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:13.845 "namespace": { 00:16:13.845 "nsid": 1, 00:16:13.845 "bdev_name": "malloc0", 00:16:13.845 "nguid": "3A88F2D5FAEC459482E3261E6F9D8FA7", 00:16:13.845 "uuid": "3a88f2d5-faec-4594-82e3-261e6f9d8fa7", 00:16:13.845 "no_auto_visible": false 00:16:13.845 } 00:16:13.845 } 00:16:13.845 }, 00:16:13.845 { 00:16:13.845 "method": "nvmf_subsystem_add_listener", 00:16:13.845 "params": { 00:16:13.845 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:13.845 "listen_address": { 00:16:13.845 "trtype": "TCP", 00:16:13.845 "adrfam": "IPv4", 00:16:13.845 "traddr": "10.0.0.3", 00:16:13.845 "trsvcid": "4420" 00:16:13.845 }, 00:16:13.845 "secure_channel": false, 00:16:13.845 "sock_impl": "ssl" 00:16:13.845 } 00:16:13.845 } 00:16:13.845 ] 
00:16:13.845 } 00:16:13.845 ] 00:16:13.845 }' 00:16:13.845 09:44:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:13.845 09:44:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=72561 00:16:13.845 09:44:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 72561 00:16:13.845 09:44:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:16:13.845 09:44:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72561 ']' 00:16:13.845 09:44:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:13.845 09:44:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:13.845 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:13.845 09:44:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:13.845 09:44:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:13.845 09:44:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:13.845 [2024-11-19 09:44:01.316262] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:16:13.845 [2024-11-19 09:44:01.316394] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:13.845 [2024-11-19 09:44:01.466079] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:14.103 [2024-11-19 09:44:01.528181] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:14.103 [2024-11-19 09:44:01.528557] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:14.103 [2024-11-19 09:44:01.528579] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:14.103 [2024-11-19 09:44:01.528588] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:14.103 [2024-11-19 09:44:01.528595] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
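Note: the target restart just logged follows the capture-and-replay pattern this test leans on — the configuration saved from the previous target (the JSON echoed above) is fed straight back into a fresh nvmf_tgt via -c. A minimal sketch of the same pattern with a regular file in place of the /dev/fd/62 process substitution the harness uses; tgt_config.json is an illustrative name, not one from this run:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config > tgt_config.json
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c tgt_config.json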
00:16:14.103 [2024-11-19 09:44:01.529102] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:14.103 [2024-11-19 09:44:01.697820] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:14.361 [2024-11-19 09:44:01.777066] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:14.361 [2024-11-19 09:44:01.809003] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:14.361 [2024-11-19 09:44:01.809257] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:14.928 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:14.928 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:14.928 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:14.928 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:14.928 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:14.928 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:14.928 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=72593 00:16:14.928 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 72593 /var/tmp/bdevperf.sock 00:16:14.928 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:16:14.928 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72593 ']' 00:16:14.928 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:14.928 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:16:14.928 "subsystems": [ 00:16:14.928 { 00:16:14.928 "subsystem": "keyring", 00:16:14.928 "config": [ 00:16:14.928 { 00:16:14.928 "method": "keyring_file_add_key", 00:16:14.928 "params": { 00:16:14.928 "name": "key0", 00:16:14.928 "path": "/tmp/tmp.P63RF6VLQv" 00:16:14.928 } 00:16:14.928 } 00:16:14.928 ] 00:16:14.928 }, 00:16:14.928 { 00:16:14.928 "subsystem": "iobuf", 00:16:14.928 "config": [ 00:16:14.928 { 00:16:14.928 "method": "iobuf_set_options", 00:16:14.928 "params": { 00:16:14.928 "small_pool_count": 8192, 00:16:14.928 "large_pool_count": 1024, 00:16:14.928 "small_bufsize": 8192, 00:16:14.928 "large_bufsize": 135168, 00:16:14.928 "enable_numa": false 00:16:14.928 } 00:16:14.928 } 00:16:14.928 ] 00:16:14.928 }, 00:16:14.928 { 00:16:14.928 "subsystem": "sock", 00:16:14.928 "config": [ 00:16:14.928 { 00:16:14.928 "method": "sock_set_default_impl", 00:16:14.928 "params": { 00:16:14.928 "impl_name": "uring" 00:16:14.928 } 00:16:14.928 }, 00:16:14.928 { 00:16:14.928 "method": "sock_impl_set_options", 00:16:14.928 "params": { 00:16:14.928 "impl_name": "ssl", 00:16:14.928 "recv_buf_size": 4096, 00:16:14.928 "send_buf_size": 4096, 00:16:14.928 "enable_recv_pipe": true, 00:16:14.928 "enable_quickack": false, 00:16:14.928 "enable_placement_id": 0, 00:16:14.928 "enable_zerocopy_send_server": true, 00:16:14.928 "enable_zerocopy_send_client": false, 00:16:14.928 "zerocopy_threshold": 0, 00:16:14.928 "tls_version": 0, 00:16:14.928 "enable_ktls": 
false 00:16:14.928 } 00:16:14.928 }, 00:16:14.928 { 00:16:14.928 "method": "sock_impl_set_options", 00:16:14.928 "params": { 00:16:14.928 "impl_name": "posix", 00:16:14.928 "recv_buf_size": 2097152, 00:16:14.928 "send_buf_size": 2097152, 00:16:14.928 "enable_recv_pipe": true, 00:16:14.928 "enable_quickack": false, 00:16:14.928 "enable_placement_id": 0, 00:16:14.928 "enable_zerocopy_send_server": true, 00:16:14.928 "enable_zerocopy_send_client": false, 00:16:14.928 "zerocopy_threshold": 0, 00:16:14.928 "tls_version": 0, 00:16:14.928 "enable_ktls": false 00:16:14.928 } 00:16:14.928 }, 00:16:14.928 { 00:16:14.928 "method": "sock_impl_set_options", 00:16:14.928 "params": { 00:16:14.928 "impl_name": "uring", 00:16:14.928 "recv_buf_size": 2097152, 00:16:14.928 "send_buf_size": 2097152, 00:16:14.928 "enable_recv_pipe": true, 00:16:14.928 "enable_quickack": false, 00:16:14.928 "enable_placement_id": 0, 00:16:14.928 "enable_zerocopy_send_server": false, 00:16:14.928 "enable_zerocopy_send_client": false, 00:16:14.928 "zerocopy_threshold": 0, 00:16:14.928 "tls_version": 0, 00:16:14.928 "enable_ktls": false 00:16:14.928 } 00:16:14.928 } 00:16:14.928 ] 00:16:14.928 }, 00:16:14.928 { 00:16:14.928 "subsystem": "vmd", 00:16:14.928 "config": [] 00:16:14.928 }, 00:16:14.928 { 00:16:14.928 "subsystem": "accel", 00:16:14.928 "config": [ 00:16:14.928 { 00:16:14.928 "method": "accel_set_options", 00:16:14.928 "params": { 00:16:14.928 "small_cache_size": 128, 00:16:14.928 "large_cache_size": 16, 00:16:14.928 "task_count": 2048, 00:16:14.928 "sequence_count": 2048, 00:16:14.928 "buf_count": 2048 00:16:14.928 } 00:16:14.928 } 00:16:14.928 ] 00:16:14.928 }, 00:16:14.928 { 00:16:14.928 "subsystem": "bdev", 00:16:14.928 "config": [ 00:16:14.928 { 00:16:14.928 "method": "bdev_set_options", 00:16:14.928 "params": { 00:16:14.928 "bdev_io_pool_size": 65535, 00:16:14.928 "bdev_io_cache_size": 256, 00:16:14.928 "bdev_auto_examine": true, 00:16:14.928 "iobuf_small_cache_size": 128, 00:16:14.928 "iobuf_large_cache_size": 16 00:16:14.928 } 00:16:14.928 }, 00:16:14.928 { 00:16:14.928 "method": "bdev_raid_set_options", 00:16:14.928 "params": { 00:16:14.928 "process_window_size_kb": 1024, 00:16:14.928 "process_max_bandwidth_mb_sec": 0 00:16:14.928 } 00:16:14.928 }, 00:16:14.928 { 00:16:14.928 "method": "bdev_iscsi_set_options", 00:16:14.928 "params": { 00:16:14.928 "timeout_sec": 30 00:16:14.928 } 00:16:14.928 }, 00:16:14.928 { 00:16:14.928 "method": "bdev_nvme_set_options", 00:16:14.928 "params": { 00:16:14.928 "action_on_timeout": "none", 00:16:14.928 "timeout_us": 0, 00:16:14.928 "timeout_admin_us": 0, 00:16:14.928 "keep_alive_timeout_ms": 10000, 00:16:14.928 "arbitration_burst": 0, 00:16:14.928 "low_priority_weight": 0, 00:16:14.928 "medium_priority_weight": 0, 00:16:14.928 "high_priority_weight": 0, 00:16:14.929 "nvme_adminq_poll_period_us": 10000, 00:16:14.929 "nvme_ioq_poll_period_us": 0, 00:16:14.929 "io_queue_requests": 512, 00:16:14.929 "delay_cmd_submit": true, 00:16:14.929 "transport_retry_count": 4, 00:16:14.929 "bdev_retry_count": 3, 00:16:14.929 "transport_ack_timeout": 0, 00:16:14.929 "ctrlr_loss_timeout_sec": 0, 00:16:14.929 "reconnect_delay_sec": 0, 00:16:14.929 "fast_io_fail_timeout_sec": 0, 00:16:14.929 "disable_auto_failback": false, 00:16:14.929 "generate_uuids": false, 00:16:14.929 "transport_tos": 0, 00:16:14.929 "nvme_error_stat": false, 00:16:14.929 "rdma_srq_size": 0, 00:16:14.929 "io_path_stat": false, 00:16:14.929 "allow_accel_sequence": false, 00:16:14.929 "rdma_max_cq_size": 0, 00:16:14.929 
"rdma_cm_event_timeout_ms": 0, 00:16:14.929 "dhchap_digests": [ 00:16:14.929 "sha256", 00:16:14.929 "sha384", 00:16:14.929 "sha512" 00:16:14.929 ], 00:16:14.929 "dhchap_dhgroups": [ 00:16:14.929 "null", 00:16:14.929 "ffdhe2048", 00:16:14.929 "ffdhe3072", 00:16:14.929 "ffdhe4096", 00:16:14.929 "ffdhe6144", 00:16:14.929 "ffdhe8192" 00:16:14.929 ] 00:16:14.929 } 00:16:14.929 }, 00:16:14.929 { 00:16:14.929 "method": "bdev_nvme_attach_controller", 00:16:14.929 "params": { 00:16:14.929 "name": "nvme0", 00:16:14.929 "trtype": "TCP", 00:16:14.929 "adrfam": "IPv4", 00:16:14.929 "traddr": "10.0.0.3", 00:16:14.929 "trsvcid": "4420", 00:16:14.929 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:14.929 "prchk_reftag": false, 00:16:14.929 "prchk_guard": false, 00:16:14.929 "ctrlr_loss_timeout_sec": 0, 00:16:14.929 "reconnect_delay_sec": 0, 00:16:14.929 "fast_io_fail_timeout_sec": 0, 00:16:14.929 "psk": "key0", 00:16:14.929 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:14.929 "hdgst": false, 00:16:14.929 "ddgst": false, 00:16:14.929 "multipath": "multipath" 00:16:14.929 } 00:16:14.929 }, 00:16:14.929 { 00:16:14.929 "method": "bdev_nvme_set_hotplug", 00:16:14.929 "params": { 00:16:14.929 "period_us": 100000, 00:16:14.929 "enable": false 00:16:14.929 } 00:16:14.929 }, 00:16:14.929 { 00:16:14.929 "method": "bdev_enable_histogram", 00:16:14.929 "params": { 00:16:14.929 "name": "nvme0n1", 00:16:14.929 "enable": true 00:16:14.929 } 00:16:14.929 }, 00:16:14.929 { 00:16:14.929 "method": "bdev_wait_for_examine" 00:16:14.929 } 00:16:14.929 ] 00:16:14.929 }, 00:16:14.929 { 00:16:14.929 "subsystem": "nbd", 00:16:14.929 "config": [] 00:16:14.929 } 00:16:14.929 ] 00:16:14.929 }' 00:16:14.929 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:14.929 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:14.929 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:14.929 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:14.929 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:14.929 [2024-11-19 09:44:02.404292] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 
00:16:14.929 [2024-11-19 09:44:02.405312] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72593 ] 00:16:15.187 [2024-11-19 09:44:02.554930] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:15.187 [2024-11-19 09:44:02.621320] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:15.187 [2024-11-19 09:44:02.761422] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:15.446 [2024-11-19 09:44:02.816101] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:16.013 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:16.013 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:16.013 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:16:16.013 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:16:16.271 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:16.271 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:16.271 Running I/O for 1 seconds... 00:16:17.459 3987.00 IOPS, 15.57 MiB/s 00:16:17.459 Latency(us) 00:16:17.459 [2024-11-19T09:44:05.082Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:17.459 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:17.459 Verification LBA range: start 0x0 length 0x2000 00:16:17.459 nvme0n1 : 1.02 4048.65 15.82 0.00 0.00 31312.48 6613.18 23592.96 00:16:17.459 [2024-11-19T09:44:05.082Z] =================================================================================================================== 00:16:17.459 [2024-11-19T09:44:05.082Z] Total : 4048.65 15.82 0.00 0.00 31312.48 6613.18 23592.96 00:16:17.459 { 00:16:17.459 "results": [ 00:16:17.459 { 00:16:17.459 "job": "nvme0n1", 00:16:17.459 "core_mask": "0x2", 00:16:17.460 "workload": "verify", 00:16:17.460 "status": "finished", 00:16:17.460 "verify_range": { 00:16:17.460 "start": 0, 00:16:17.460 "length": 8192 00:16:17.460 }, 00:16:17.460 "queue_depth": 128, 00:16:17.460 "io_size": 4096, 00:16:17.460 "runtime": 1.016388, 00:16:17.460 "iops": 4048.65071212962, 00:16:17.460 "mibps": 15.815041844256328, 00:16:17.460 "io_failed": 0, 00:16:17.460 "io_timeout": 0, 00:16:17.460 "avg_latency_us": 31312.48257682536, 00:16:17.460 "min_latency_us": 6613.178181818182, 00:16:17.460 "max_latency_us": 23592.96 00:16:17.460 } 00:16:17.460 ], 00:16:17.460 "core_count": 1 00:16:17.460 } 00:16:17.460 09:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:16:17.460 09:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:16:17.460 09:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:16:17.460 09:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:16:17.460 09:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:16:17.460 
09:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:16:17.460 09:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:16:17.460 09:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:16:17.460 09:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:16:17.460 09:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:16:17.460 09:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:16:17.460 nvmf_trace.0 00:16:17.460 09:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:16:17.460 09:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 72593 00:16:17.460 09:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72593 ']' 00:16:17.460 09:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72593 00:16:17.460 09:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:17.460 09:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:17.460 09:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72593 00:16:17.460 09:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:17.460 09:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:17.460 killing process with pid 72593 00:16:17.460 Received shutdown signal, test time was about 1.000000 seconds 00:16:17.460 00:16:17.460 Latency(us) 00:16:17.460 [2024-11-19T09:44:05.083Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:17.460 [2024-11-19T09:44:05.083Z] =================================================================================================================== 00:16:17.460 [2024-11-19T09:44:05.083Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:17.460 09:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72593' 00:16:17.460 09:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72593 00:16:17.460 09:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72593 00:16:17.719 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:16:17.719 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:17.719 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:16:17.719 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:17.719 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:16:17.719 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:17.719 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:17.719 rmmod nvme_tcp 00:16:17.719 rmmod nvme_fabrics 00:16:17.719 rmmod nvme_keyring 00:16:17.719 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # 
modprobe -v -r nvme-fabrics 00:16:17.719 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:16:17.719 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:16:17.719 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 72561 ']' 00:16:17.719 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 72561 00:16:17.719 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72561 ']' 00:16:17.719 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72561 00:16:17.719 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:17.719 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:17.719 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72561 00:16:17.719 killing process with pid 72561 00:16:17.719 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:17.719 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:17.719 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72561' 00:16:17.719 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72561 00:16:17.719 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72561 00:16:17.996 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:17.996 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:17.996 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:17.996 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:16:17.996 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:17.996 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:16:17.996 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:16:17.996 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:17.996 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:17.996 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:17.996 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:17.996 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:17.996 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:18.280 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:18.280 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:18.280 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:18.280 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:18.280 09:44:05 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:18.280 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:18.280 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:18.280 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:18.280 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:18.280 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:18.280 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:18.280 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:18.280 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:18.280 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@300 -- # return 0 00:16:18.280 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.o9WAgBK3V6 /tmp/tmp.3VVZKR698P /tmp/tmp.P63RF6VLQv 00:16:18.280 ************************************ 00:16:18.280 END TEST nvmf_tls 00:16:18.280 ************************************ 00:16:18.280 00:16:18.280 real 1m26.227s 00:16:18.280 user 2m19.998s 00:16:18.280 sys 0m27.361s 00:16:18.280 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:18.280 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:18.280 09:44:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:16:18.280 09:44:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:18.280 09:44:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:18.280 09:44:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:18.280 ************************************ 00:16:18.280 START TEST nvmf_fips 00:16:18.280 ************************************ 00:16:18.280 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:16:18.280 * Looking for test storage... 
00:16:18.280 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:16:18.280 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:18.280 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:18.540 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lcov --version 00:16:18.540 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:18.540 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:18.540 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:18.540 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:18.540 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:16:18.540 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:16:18.540 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:16:18.540 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:16:18.540 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:16:18.540 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:16:18.540 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:16:18.540 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:18.540 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:16:18.540 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:16:18.540 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:18.540 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:18.540 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:16:18.540 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:16:18.540 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:18.540 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:16:18.540 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:16:18.540 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:16:18.540 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:16:18.540 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:18.540 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:16:18.540 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:16:18.540 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:18.540 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:18.540 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:16:18.540 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:18.540 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:18.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:18.540 --rc genhtml_branch_coverage=1 00:16:18.540 --rc genhtml_function_coverage=1 00:16:18.540 --rc genhtml_legend=1 00:16:18.540 --rc geninfo_all_blocks=1 00:16:18.540 --rc geninfo_unexecuted_blocks=1 00:16:18.540 00:16:18.540 ' 00:16:18.540 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:18.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:18.540 --rc genhtml_branch_coverage=1 00:16:18.540 --rc genhtml_function_coverage=1 00:16:18.540 --rc genhtml_legend=1 00:16:18.540 --rc geninfo_all_blocks=1 00:16:18.540 --rc geninfo_unexecuted_blocks=1 00:16:18.540 00:16:18.540 ' 00:16:18.540 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:18.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:18.540 --rc genhtml_branch_coverage=1 00:16:18.540 --rc genhtml_function_coverage=1 00:16:18.540 --rc genhtml_legend=1 00:16:18.540 --rc geninfo_all_blocks=1 00:16:18.540 --rc geninfo_unexecuted_blocks=1 00:16:18.540 00:16:18.540 ' 00:16:18.540 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:18.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:18.540 --rc genhtml_branch_coverage=1 00:16:18.540 --rc genhtml_function_coverage=1 00:16:18.540 --rc genhtml_legend=1 00:16:18.540 --rc geninfo_all_blocks=1 00:16:18.540 --rc geninfo_unexecuted_blocks=1 00:16:18.540 00:16:18.540 ' 00:16:18.540 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:18.540 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:16:18.540 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
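The lt/ge helpers exercised just above (and again later for the OpenSSL version check) reduce to a field-by-field numeric comparison of dotted version strings. A minimal stand-alone equivalent, using an illustrative name rather than the exact scripts/common.sh implementation:

  # Sketch: return 0 when $1 sorts strictly before $2, comparing dot/dash-separated fields numerically.
  ver_lt() {
      local IFS=.- i
      local -a a b
      read -ra a <<< "$1"; read -ra b <<< "$2"
      for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
          (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
          (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
      done
      return 1    # versions are equal, so not less-than
  }
  ver_lt 1.15 2 && echo '1.15 < 2'   # mirrors the lcov check traced above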
00:16:18.540 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:18.540 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:18.540 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:18.540 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:18.540 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:18.540 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:18.540 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:18.540 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:18.540 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:18.540 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c 00:16:18.540 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=9203ba0c-8506-4f0b-a886-a7f874c4694c 00:16:18.540 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:18.540 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:18.540 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:18.540 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:18.540 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:18.540 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:16:18.540 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:18.540 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:18.540 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:18.540 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:18.540 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:18.541 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:18.541 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:16:18.541 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:18.541 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:16:18.541 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:18.541 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:18.541 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:18.541 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:18.541 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:18.541 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:18.541 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:18.541 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:18.541 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:18.541 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:18.541 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:18.541 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:16:18.541 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local 
target=3.0.0 00:16:18.541 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:16:18.541 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:16:18.541 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:16:18.541 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:16:18.541 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:18.541 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:18.541 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:16:18.541 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:16:18.541 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:16:18.541 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:16:18.541 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:16:18.541 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:16:18.541 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:16:18.541 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:18.541 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:16:18.541 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:16:18.541 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:18.541 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:18.541 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:16:18.541 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:16:18.541 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:16:18.541 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:16:18.541 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:16:18.541 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:16:18.541 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:16:18.541 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:16:18.541 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:16:18.541 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:16:18.541 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:18.541 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:18.541 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:16:18.541 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:18.541 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:16:18.541 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:16:18.541 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:18.541 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:16:18.541 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:16:18.541 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:16:18.541 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:16:18.541 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:16:18.541 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:16:18.541 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:16:18.541 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:18.541 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:16:18.541 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:16:18.541 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:16:18.541 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:16:18.541 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:16:18.541 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:16:18.541 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:16:18.541 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:16:18.541 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:16:18.541 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:16:18.541 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:16:18.541 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:16:18.541 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:16:18.541 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:16:18.541 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:16:18.541 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:16:18.541 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:16:18.541 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:16:18.541 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:16:18.541 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:16:18.541 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:16:18.541 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:16:18.541 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:16:18.541 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:16:18.541 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:16:18.541 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:18.541 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:16:18.541 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:18.541 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # type -P openssl 00:16:18.541 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:18.541 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:16:18.541 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:16:18.541 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:16:18.800 Error setting digest 00:16:18.800 40C2061D2F7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:16:18.800 40C2061D2F7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:16:18.800 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:16:18.800 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:18.800 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:18.800 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:18.800 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:16:18.800 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:18.800 
09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:18.800 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:18.800 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:18.800 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:18.800 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:18.800 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:18.800 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:18.800 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:16:18.800 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:16:18.800 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:16:18.800 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:16:18.800 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:16:18.800 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@460 -- # nvmf_veth_init 00:16:18.800 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:18.800 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:18.800 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:18.800 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:18.800 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:18.800 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:18.800 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:18.800 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:18.800 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:18.800 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:18.800 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:18.800 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:18.800 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:18.800 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:18.800 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:18.800 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:18.800 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:18.800 Cannot find device "nvmf_init_br" 00:16:18.800 09:44:06 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # true 00:16:18.801 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:18.801 Cannot find device "nvmf_init_br2" 00:16:18.801 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # true 00:16:18.801 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:18.801 Cannot find device "nvmf_tgt_br" 00:16:18.801 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # true 00:16:18.801 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:16:18.801 Cannot find device "nvmf_tgt_br2" 00:16:18.801 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # true 00:16:18.801 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:18.801 Cannot find device "nvmf_init_br" 00:16:18.801 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # true 00:16:18.801 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:18.801 Cannot find device "nvmf_init_br2" 00:16:18.801 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # true 00:16:18.801 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:18.801 Cannot find device "nvmf_tgt_br" 00:16:18.801 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # true 00:16:18.801 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:18.801 Cannot find device "nvmf_tgt_br2" 00:16:18.801 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # true 00:16:18.801 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:18.801 Cannot find device "nvmf_br" 00:16:18.801 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # true 00:16:18.801 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:16:18.801 Cannot find device "nvmf_init_if" 00:16:18.801 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # true 00:16:18.801 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:18.801 Cannot find device "nvmf_init_if2" 00:16:18.801 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # true 00:16:18.801 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:18.801 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:18.801 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # true 00:16:18.801 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:18.801 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:18.801 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # true 00:16:18.801 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:18.801 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:18.801 09:44:06 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:18.801 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:18.801 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:18.801 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:18.801 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:18.801 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:18.801 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:18.801 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:19.059 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:19.059 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:19.059 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:19.059 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:16:19.059 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:19.059 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:19.059 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:19.059 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:19.059 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:19.059 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:19.059 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:19.059 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:19.059 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:19.059 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:19.059 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:19.059 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:19.060 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:19.060 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:19.060 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:19.060 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:19.060 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:19.060 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:19.060 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:19.060 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:19.060 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.072 ms 00:16:19.060 00:16:19.060 --- 10.0.0.3 ping statistics --- 00:16:19.060 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:19.060 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:16:19.060 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:19.060 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:16:19.060 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.047 ms 00:16:19.060 00:16:19.060 --- 10.0.0.4 ping statistics --- 00:16:19.060 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:19.060 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:16:19.060 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:19.060 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:19.060 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.046 ms 00:16:19.060 00:16:19.060 --- 10.0.0.1 ping statistics --- 00:16:19.060 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:19.060 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:16:19.060 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:19.060 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:19.060 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.048 ms 00:16:19.060 00:16:19.060 --- 10.0.0.2 ping statistics --- 00:16:19.060 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:19.060 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:16:19.060 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:19.060 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@461 -- # return 0 00:16:19.060 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:19.060 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:19.060 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:19.060 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:19.060 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:19.060 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:19.060 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:19.060 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:16:19.060 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:19.060 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:19.060 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:16:19.060 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=72909 00:16:19.060 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 72909 00:16:19.060 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:19.060 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 72909 ']' 00:16:19.060 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:19.060 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:19.060 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:19.060 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:19.060 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:19.060 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:16:19.319 [2024-11-19 09:44:06.689721] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 
00:16:19.319 [2024-11-19 09:44:06.690033] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:19.319 [2024-11-19 09:44:06.843321] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:19.319 [2024-11-19 09:44:06.907698] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:19.319 [2024-11-19 09:44:06.907746] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:19.319 [2024-11-19 09:44:06.907760] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:19.319 [2024-11-19 09:44:06.907771] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:19.319 [2024-11-19 09:44:06.907780] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:19.319 [2024-11-19 09:44:06.908269] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:19.578 [2024-11-19 09:44:06.966874] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:20.144 09:44:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:20.144 09:44:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:16:20.144 09:44:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:20.144 09:44:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:20.144 09:44:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:16:20.144 09:44:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:20.144 09:44:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:16:20.144 09:44:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:16:20.144 09:44:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:16:20.144 09:44:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.fAE 00:16:20.144 09:44:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:16:20.144 09:44:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.fAE 00:16:20.144 09:44:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.fAE 00:16:20.144 09:44:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.fAE 00:16:20.144 09:44:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:20.710 [2024-11-19 09:44:08.045141] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:20.710 [2024-11-19 09:44:08.061115] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:20.710 [2024-11-19 09:44:08.061448] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:20.710 malloc0 00:16:20.710 09:44:08 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:20.710 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=72950 00:16:20.710 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:20.710 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 72950 /var/tmp/bdevperf.sock 00:16:20.710 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 72950 ']' 00:16:20.710 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:20.710 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:20.710 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:20.710 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:20.710 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:20.710 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:16:20.710 [2024-11-19 09:44:08.202582] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:16:20.710 [2024-11-19 09:44:08.203088] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72950 ] 00:16:20.968 [2024-11-19 09:44:08.350195] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:20.968 [2024-11-19 09:44:08.416440] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:20.968 [2024-11-19 09:44:08.475495] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:20.968 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:20.968 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:16:20.968 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.fAE 00:16:21.227 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:16:21.485 [2024-11-19 09:44:09.073401] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:21.744 TLSTESTn1 00:16:21.744 09:44:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:21.744 Running I/O for 10 seconds... 
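The 10-second verify run whose per-second throughput follows was wired up by the RPC calls traced above: register the PSK file as a keyring entry over the bdevperf RPC socket, attach an NVMe/TCP controller that references that key, then kick off the test. Condensed, with the address, NQNs, and temporary key path copied from this run (the key file name comes from mktemp above, so it changes per run):

  # Condensed replay of the TLS attach sequence from this log; not a general recipe.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/bdevperf.sock
  "$rpc" -s "$sock" keyring_file_add_key key0 /tmp/spdk-psk.fAE
  "$rpc" -s "$sock" bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s "$sock" perform_tests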
00:16:24.062 3895.00 IOPS, 15.21 MiB/s [2024-11-19T09:44:12.620Z] 4015.00 IOPS, 15.68 MiB/s [2024-11-19T09:44:13.556Z] 4053.00 IOPS, 15.83 MiB/s [2024-11-19T09:44:14.490Z] 4081.25 IOPS, 15.94 MiB/s [2024-11-19T09:44:15.425Z] 4085.40 IOPS, 15.96 MiB/s [2024-11-19T09:44:16.360Z] 4097.00 IOPS, 16.00 MiB/s [2024-11-19T09:44:17.295Z] 4101.86 IOPS, 16.02 MiB/s [2024-11-19T09:44:18.670Z] 4106.75 IOPS, 16.04 MiB/s [2024-11-19T09:44:19.604Z] 4105.11 IOPS, 16.04 MiB/s [2024-11-19T09:44:19.604Z] 4101.00 IOPS, 16.02 MiB/s 00:16:31.981 Latency(us) 00:16:31.981 [2024-11-19T09:44:19.604Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:31.981 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:16:31.981 Verification LBA range: start 0x0 length 0x2000 00:16:31.981 TLSTESTn1 : 10.02 4107.15 16.04 0.00 0.00 31111.04 4706.68 26929.34 00:16:31.981 [2024-11-19T09:44:19.604Z] =================================================================================================================== 00:16:31.981 [2024-11-19T09:44:19.604Z] Total : 4107.15 16.04 0.00 0.00 31111.04 4706.68 26929.34 00:16:31.981 { 00:16:31.981 "results": [ 00:16:31.981 { 00:16:31.981 "job": "TLSTESTn1", 00:16:31.981 "core_mask": "0x4", 00:16:31.981 "workload": "verify", 00:16:31.981 "status": "finished", 00:16:31.981 "verify_range": { 00:16:31.981 "start": 0, 00:16:31.981 "length": 8192 00:16:31.981 }, 00:16:31.981 "queue_depth": 128, 00:16:31.981 "io_size": 4096, 00:16:31.981 "runtime": 10.015954, 00:16:31.981 "iops": 4107.147456947187, 00:16:31.981 "mibps": 16.043544753699948, 00:16:31.981 "io_failed": 0, 00:16:31.981 "io_timeout": 0, 00:16:31.981 "avg_latency_us": 31111.042012609752, 00:16:31.981 "min_latency_us": 4706.676363636364, 00:16:31.981 "max_latency_us": 26929.33818181818 00:16:31.981 } 00:16:31.981 ], 00:16:31.981 "core_count": 1 00:16:31.981 } 00:16:31.981 09:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:16:31.981 09:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:16:31.981 09:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:16:31.981 09:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:16:31.981 09:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:16:31.981 09:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:16:31.981 09:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:16:31.981 09:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:16:31.981 09:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:16:31.981 09:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:16:31.981 nvmf_trace.0 00:16:31.981 09:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:16:31.981 09:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 72950 00:16:31.981 09:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 72950 ']' 00:16:31.981 09:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 
72950 00:16:31.981 09:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:16:31.981 09:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:31.981 09:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72950 00:16:31.981 killing process with pid 72950 00:16:31.981 Received shutdown signal, test time was about 10.000000 seconds 00:16:31.981 00:16:31.981 Latency(us) 00:16:31.982 [2024-11-19T09:44:19.605Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:31.982 [2024-11-19T09:44:19.605Z] =================================================================================================================== 00:16:31.982 [2024-11-19T09:44:19.605Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:31.982 09:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:16:31.982 09:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:16:31.982 09:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72950' 00:16:31.982 09:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 72950 00:16:31.982 09:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 72950 00:16:32.240 09:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:16:32.240 09:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:32.240 09:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:16:32.240 09:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:32.240 09:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:16:32.240 09:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:32.240 09:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:32.240 rmmod nvme_tcp 00:16:32.240 rmmod nvme_fabrics 00:16:32.240 rmmod nvme_keyring 00:16:32.240 09:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:32.240 09:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:16:32.240 09:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:16:32.240 09:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 72909 ']' 00:16:32.240 09:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 72909 00:16:32.240 09:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 72909 ']' 00:16:32.240 09:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 72909 00:16:32.240 09:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:16:32.240 09:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:32.240 09:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72909 00:16:32.240 killing process with pid 72909 00:16:32.240 09:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:32.240 09:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fips 
-- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:32.241 09:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72909' 00:16:32.241 09:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 72909 00:16:32.241 09:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 72909 00:16:32.552 09:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:32.552 09:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:32.552 09:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:32.552 09:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:16:32.552 09:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:16:32.552 09:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:32.552 09:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:16:32.552 09:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:32.552 09:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:32.552 09:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:32.552 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:32.552 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:32.552 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:32.552 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:32.552 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:32.552 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:32.552 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:32.552 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:32.552 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:32.552 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:32.552 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:32.824 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:32.824 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:32.824 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:32.824 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:32.824 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:32.824 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@300 -- # return 0 00:16:32.824 09:44:20 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.fAE 00:16:32.824 ************************************ 00:16:32.824 END TEST nvmf_fips 00:16:32.824 ************************************ 00:16:32.824 00:16:32.824 real 0m14.409s 00:16:32.824 user 0m19.713s 00:16:32.824 sys 0m5.656s 00:16:32.824 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:32.824 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:16:32.824 09:44:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:16:32.824 09:44:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:32.824 09:44:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:32.824 09:44:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:32.824 ************************************ 00:16:32.824 START TEST nvmf_control_msg_list 00:16:32.824 ************************************ 00:16:32.824 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:16:32.824 * Looking for test storage... 00:16:32.824 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:32.824 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:32.824 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lcov --version 00:16:32.824 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:33.083 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:33.083 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:33.083 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:33.083 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:33.083 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:16:33.083 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:16:33.083 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:16:33.084 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:16:33.084 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:16:33.084 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:16:33.084 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:16:33.084 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:33.084 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:16:33.084 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:16:33.084 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:16:33.084 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:33.084 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:16:33.084 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:16:33.084 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:33.084 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:16:33.084 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:16:33.084 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:16:33.084 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:16:33.084 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:33.084 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:16:33.084 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:16:33.084 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:33.084 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:33.084 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:16:33.084 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:33.084 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:33.084 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:33.084 --rc genhtml_branch_coverage=1 00:16:33.084 --rc genhtml_function_coverage=1 00:16:33.084 --rc genhtml_legend=1 00:16:33.084 --rc geninfo_all_blocks=1 00:16:33.084 --rc geninfo_unexecuted_blocks=1 00:16:33.084 00:16:33.084 ' 00:16:33.084 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:33.084 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:33.084 --rc genhtml_branch_coverage=1 00:16:33.084 --rc genhtml_function_coverage=1 00:16:33.084 --rc genhtml_legend=1 00:16:33.084 --rc geninfo_all_blocks=1 00:16:33.084 --rc geninfo_unexecuted_blocks=1 00:16:33.084 00:16:33.084 ' 00:16:33.084 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:33.084 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:33.084 --rc genhtml_branch_coverage=1 00:16:33.084 --rc genhtml_function_coverage=1 00:16:33.084 --rc genhtml_legend=1 00:16:33.084 --rc geninfo_all_blocks=1 00:16:33.084 --rc geninfo_unexecuted_blocks=1 00:16:33.084 00:16:33.084 ' 00:16:33.084 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:33.084 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:33.084 --rc genhtml_branch_coverage=1 00:16:33.084 --rc genhtml_function_coverage=1 00:16:33.084 --rc genhtml_legend=1 00:16:33.084 --rc geninfo_all_blocks=1 00:16:33.084 --rc 
geninfo_unexecuted_blocks=1 00:16:33.084 00:16:33.084 ' 00:16:33.084 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:33.084 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:16:33.084 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:33.084 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:33.084 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:33.084 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:33.084 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:33.084 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:33.084 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:33.084 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:33.084 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:33.084 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:33.084 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c 00:16:33.084 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=9203ba0c-8506-4f0b-a886-a7f874c4694c 00:16:33.084 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:33.084 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:33.084 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:33.084 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:33.084 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:33.084 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:16:33.084 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:33.084 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:33.084 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:33.084 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:33.084 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:33.084 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:33.084 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:16:33.084 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:33.084 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:16:33.084 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:33.084 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:33.084 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:33.084 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:33.084 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:33.084 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:33.084 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:33.084 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:33.084 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:33.084 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:33.084 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:16:33.084 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:33.084 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:33.084 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:33.084 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:33.084 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:33.084 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:33.084 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:33.085 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:33.085 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:16:33.085 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:16:33.085 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:16:33.085 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:16:33.085 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:16:33.085 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@460 -- # nvmf_veth_init 00:16:33.085 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:33.085 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:33.085 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:33.085 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:33.085 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:33.085 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:33.085 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:33.085 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:33.085 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:33.085 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:33.085 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:33.085 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:33.085 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:33.085 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:33.085 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:33.085 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:33.085 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:33.085 Cannot find device "nvmf_init_br" 00:16:33.085 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # true 00:16:33.085 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:33.085 Cannot find device "nvmf_init_br2" 00:16:33.085 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # true 00:16:33.085 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:33.085 Cannot find device "nvmf_tgt_br" 00:16:33.085 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # true 00:16:33.085 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:16:33.085 Cannot find device "nvmf_tgt_br2" 00:16:33.085 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # true 00:16:33.085 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:33.085 Cannot find device "nvmf_init_br" 00:16:33.085 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # true 00:16:33.085 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:33.085 Cannot find device "nvmf_init_br2" 00:16:33.085 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # true 00:16:33.085 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:33.085 Cannot find device "nvmf_tgt_br" 00:16:33.085 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # true 00:16:33.085 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:33.085 Cannot find device "nvmf_tgt_br2" 00:16:33.085 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # true 00:16:33.085 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:33.085 Cannot find device "nvmf_br" 00:16:33.085 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # true 00:16:33.085 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:16:33.085 Cannot find 
device "nvmf_init_if" 00:16:33.085 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # true 00:16:33.085 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:33.085 Cannot find device "nvmf_init_if2" 00:16:33.085 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # true 00:16:33.085 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:33.085 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:33.085 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # true 00:16:33.085 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:33.085 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:33.085 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # true 00:16:33.085 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:33.085 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:33.085 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:33.085 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:33.085 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:33.085 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:33.344 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:33.344 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:33.344 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:33.344 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:33.344 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:33.344 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:33.344 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:33.344 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:16:33.344 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:33.344 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:33.344 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:33.344 09:44:20 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:33.344 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:33.344 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:33.344 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:33.344 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:33.344 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:33.344 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:33.344 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:33.344 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:33.344 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:33.344 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:33.344 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:33.344 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:33.344 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:33.344 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:33.344 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:33.344 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:33.344 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.079 ms 00:16:33.344 00:16:33.344 --- 10.0.0.3 ping statistics --- 00:16:33.344 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:33.344 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:16:33.344 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:33.344 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:16:33.344 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.043 ms 00:16:33.344 00:16:33.344 --- 10.0.0.4 ping statistics --- 00:16:33.344 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:33.344 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:16:33.344 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:33.344 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:33.344 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:16:33.344 00:16:33.344 --- 10.0.0.1 ping statistics --- 00:16:33.344 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:33.344 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:16:33.344 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:33.344 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:33.344 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.055 ms 00:16:33.344 00:16:33.344 --- 10.0.0.2 ping statistics --- 00:16:33.344 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:33.344 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:16:33.344 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:33.344 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@461 -- # return 0 00:16:33.344 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:33.344 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:33.344 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:33.344 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:33.344 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:33.344 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:33.344 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:33.344 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:16:33.344 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:33.344 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:33.344 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:16:33.344 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=73328 00:16:33.344 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:16:33.344 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 73328 00:16:33.344 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 73328 ']' 00:16:33.344 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:33.344 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:33.344 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:33.344 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
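At this point the harness starts the target application inside the namespace and then configures it over RPC. A condensed, illustrative sketch of that sequence, reconstructed from the commands traced in this log (paths, ports and flags are taken verbatim from the trace; rpc_cmd is the harness's thin wrapper around scripts/rpc.py, and the socket-wait loop below is a simplified stand-in for the real waitforlisten helper, not the script's literal code):

  # start nvmf_tgt on core 0 inside the target network namespace (as traced above)
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF &
  # wait until the RPC socket exists (simplified stand-in for waitforlisten)
  until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done

  # TCP transport with a small in-capsule size and a single control-message buffer,
  # exactly the options traced below
  rpc_cmd nvmf_create_transport -t tcp -o --in-capsule-data-size 768 --control-msg-num 1
  rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a      # -a: allow any host
  rpc_cmd bdev_malloc_create -b Malloc0 32 512                     # 32 MiB ramdisk, 512 B blocks
  rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
  rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420

  # three concurrent single-queue readers contend for the one control-message buffer
  for mask in 0x2 0x4 0x8; do
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c "$mask" -q 1 -o 4096 -w randread -t 1 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' &
  done
  wait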
00:16:33.344 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:33.344 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:16:33.344 [2024-11-19 09:44:20.952181] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:16:33.344 [2024-11-19 09:44:20.952312] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:33.602 [2024-11-19 09:44:21.106516] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:33.602 [2024-11-19 09:44:21.163981] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:33.602 [2024-11-19 09:44:21.164292] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:33.602 [2024-11-19 09:44:21.164400] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:33.602 [2024-11-19 09:44:21.164506] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:33.602 [2024-11-19 09:44:21.164589] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:33.602 [2024-11-19 09:44:21.165153] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:33.863 [2024-11-19 09:44:21.224958] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:33.863 09:44:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:33.863 09:44:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:16:33.863 09:44:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:33.863 09:44:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:33.863 09:44:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:16:33.863 09:44:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:33.863 09:44:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:16:33.863 09:44:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:16:33.863 09:44:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:16:33.863 09:44:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.863 09:44:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:16:33.863 [2024-11-19 09:44:21.346065] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:33.863 09:44:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.863 09:44:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd 
nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:16:33.863 09:44:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.863 09:44:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:16:33.863 09:44:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.863 09:44:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:16:33.863 09:44:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.863 09:44:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:16:33.863 Malloc0 00:16:33.863 09:44:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.863 09:44:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:16:33.863 09:44:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.863 09:44:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:16:33.863 09:44:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.863 09:44:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:16:33.863 09:44:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.863 09:44:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:16:33.863 [2024-11-19 09:44:21.386286] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:33.863 09:44:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.863 09:44:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=73347 00:16:33.863 09:44:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:16:33.863 09:44:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=73348 00:16:33.863 09:44:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:16:33.863 09:44:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=73349 00:16:33.863 09:44:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:16:33.863 09:44:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 73347 00:16:34.121 [2024-11-19 09:44:21.564602] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: 
Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:16:34.121 [2024-11-19 09:44:21.575482] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:16:34.121 [2024-11-19 09:44:21.575687] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:16:35.057 Initializing NVMe Controllers 00:16:35.057 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:16:35.057 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:16:35.057 Initialization complete. Launching workers. 00:16:35.057 ======================================================== 00:16:35.057 Latency(us) 00:16:35.057 Device Information : IOPS MiB/s Average min max 00:16:35.057 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 3298.00 12.88 302.79 131.95 873.61 00:16:35.057 ======================================================== 00:16:35.057 Total : 3298.00 12.88 302.79 131.95 873.61 00:16:35.057 00:16:35.057 Initializing NVMe Controllers 00:16:35.057 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:16:35.057 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:16:35.057 Initialization complete. Launching workers. 00:16:35.057 ======================================================== 00:16:35.057 Latency(us) 00:16:35.057 Device Information : IOPS MiB/s Average min max 00:16:35.057 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 3338.00 13.04 299.26 185.32 888.87 00:16:35.057 ======================================================== 00:16:35.057 Total : 3338.00 13.04 299.26 185.32 888.87 00:16:35.057 00:16:35.057 Initializing NVMe Controllers 00:16:35.057 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:16:35.057 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:16:35.057 Initialization complete. Launching workers. 
00:16:35.057 ======================================================== 00:16:35.057 Latency(us) 00:16:35.057 Device Information : IOPS MiB/s Average min max 00:16:35.057 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 3337.00 13.04 299.27 183.94 890.45 00:16:35.057 ======================================================== 00:16:35.057 Total : 3337.00 13.04 299.27 183.94 890.45 00:16:35.057 00:16:35.057 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 73348 00:16:35.057 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 73349 00:16:35.057 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:16:35.057 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:16:35.057 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:35.057 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:16:35.057 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:35.057 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:16:35.057 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:35.057 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:35.057 rmmod nvme_tcp 00:16:35.057 rmmod nvme_fabrics 00:16:35.317 rmmod nvme_keyring 00:16:35.317 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:35.317 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:16:35.317 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:16:35.317 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 73328 ']' 00:16:35.317 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 73328 00:16:35.317 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 73328 ']' 00:16:35.317 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 73328 00:16:35.317 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:16:35.317 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:35.317 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73328 00:16:35.317 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:35.317 killing process with pid 73328 00:16:35.317 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:35.317 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73328' 00:16:35.317 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 73328 00:16:35.317 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@978 -- # wait 73328 00:16:35.577 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:35.577 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:35.577 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:35.577 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:16:35.577 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:16:35.577 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:35.577 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:16:35.577 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:35.577 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:35.577 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:35.577 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:35.577 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:35.577 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:35.577 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:35.577 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:35.577 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:35.577 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:35.577 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:35.577 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:35.577 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:35.577 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:35.577 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:35.836 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:35.836 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:35.836 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:35.836 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:35.836 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@300 -- # return 0 00:16:35.836 00:16:35.836 real 0m2.965s 00:16:35.836 user 0m4.801s 00:16:35.836 
sys 0m1.341s 00:16:35.836 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:35.836 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:16:35.836 ************************************ 00:16:35.836 END TEST nvmf_control_msg_list 00:16:35.836 ************************************ 00:16:35.836 09:44:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:16:35.836 09:44:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:35.836 09:44:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:35.836 09:44:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:35.836 ************************************ 00:16:35.836 START TEST nvmf_wait_for_buf 00:16:35.836 ************************************ 00:16:35.836 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:16:35.836 * Looking for test storage... 00:16:35.836 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:35.836 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:35.836 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lcov --version 00:16:35.836 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:36.096 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:36.096 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:36.096 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:36.096 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:36.096 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:16:36.096 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:16:36.096 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:16:36.096 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:16:36.096 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:16:36.096 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:16:36.096 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:16:36.096 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:36.096 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:16:36.096 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:16:36.096 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:36.096 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:36.096 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:16:36.096 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:16:36.096 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:36.096 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:16:36.096 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:16:36.096 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:16:36.096 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:16:36.096 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:36.096 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:16:36.096 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:16:36.096 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:36.096 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:36.096 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:16:36.096 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:36.096 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:36.096 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:36.096 --rc genhtml_branch_coverage=1 00:16:36.096 --rc genhtml_function_coverage=1 00:16:36.096 --rc genhtml_legend=1 00:16:36.096 --rc geninfo_all_blocks=1 00:16:36.096 --rc geninfo_unexecuted_blocks=1 00:16:36.096 00:16:36.096 ' 00:16:36.096 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:36.096 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:36.096 --rc genhtml_branch_coverage=1 00:16:36.096 --rc genhtml_function_coverage=1 00:16:36.096 --rc genhtml_legend=1 00:16:36.096 --rc geninfo_all_blocks=1 00:16:36.096 --rc geninfo_unexecuted_blocks=1 00:16:36.096 00:16:36.096 ' 00:16:36.096 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:36.096 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:36.096 --rc genhtml_branch_coverage=1 00:16:36.096 --rc genhtml_function_coverage=1 00:16:36.096 --rc genhtml_legend=1 00:16:36.096 --rc geninfo_all_blocks=1 00:16:36.096 --rc geninfo_unexecuted_blocks=1 00:16:36.096 00:16:36.096 ' 00:16:36.096 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:36.096 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:36.096 --rc genhtml_branch_coverage=1 00:16:36.096 --rc genhtml_function_coverage=1 00:16:36.096 --rc genhtml_legend=1 00:16:36.096 --rc geninfo_all_blocks=1 00:16:36.096 --rc geninfo_unexecuted_blocks=1 00:16:36.096 00:16:36.096 ' 00:16:36.096 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:36.096 09:44:23 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:16:36.096 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:36.096 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:36.096 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:36.096 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:36.096 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:36.096 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:36.096 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:36.096 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:36.096 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:36.096 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:36.096 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c 00:16:36.096 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=9203ba0c-8506-4f0b-a886-a7f874c4694c 00:16:36.096 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:36.096 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:36.096 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:36.096 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:36.096 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:36.096 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:16:36.096 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:36.096 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:36.096 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:36.097 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:36.097 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:36.097 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:36.097 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:16:36.097 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:36.097 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:16:36.097 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:36.097 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:36.097 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:36.097 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:36.097 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:36.097 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:36.097 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:36.097 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:36.097 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:36.097 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:36.097 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:16:36.097 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 
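nvmftestinit for this test rebuilds the same virtual topology used by the previous test: stale interfaces are torn down first (the "Cannot find device" messages that follow are the expected result of that best-effort cleanup), then a namespace, four veth pairs, a bridge and firewall rules are created. A condensed sketch of that topology, reconstructed from the ip/iptables commands traced in this log; nvmf_veth_init in test/nvmf/common.sh performs the same steps with additional checks, and the real iptables comments embed the full rule text so teardown can filter on SPDK_NVMF:

  ip netns add nvmf_tgt_ns_spdk
  # initiator-side and target-side veth pairs; the *_br ends stay in the root namespace
  ip link add nvmf_init_if  type veth peer name nvmf_init_br
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
  ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if          # first initiator IP
  ip addr add 10.0.0.2/24 dev nvmf_init_if2         # second initiator IP
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if    # first target IP
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2   # second target IP
  # bridge the root-namespace ends together, then open TCP/4420 and bridge forwarding
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" up
    ip link set "$dev" master nvmf_br
  done
  ip link set nvmf_init_if up && ip link set nvmf_init_if2 up
  ip netns exec nvmf_tgt_ns_spdk sh -c 'ip link set nvmf_tgt_if up; ip link set nvmf_tgt_if2 up; ip link set lo up'
  iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
  iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment SPDK_NVMF
  ping -c 1 10.0.0.3 && ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1   # connectivity check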
00:16:36.097 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:36.097 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:36.097 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:36.097 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:36.097 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:36.097 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:36.097 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:36.097 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:16:36.097 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:16:36.097 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:16:36.097 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:16:36.097 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:16:36.097 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@460 -- # nvmf_veth_init 00:16:36.097 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:36.097 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:36.097 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:36.097 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:36.097 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:36.097 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:36.097 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:36.097 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:36.097 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:36.097 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:36.097 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:36.097 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:36.097 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:36.097 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:36.097 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:36.097 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:36.097 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:36.097 Cannot find device "nvmf_init_br" 00:16:36.097 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # true 00:16:36.097 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:36.097 Cannot find device "nvmf_init_br2" 00:16:36.097 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # true 00:16:36.097 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:36.097 Cannot find device "nvmf_tgt_br" 00:16:36.097 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # true 00:16:36.097 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:16:36.097 Cannot find device "nvmf_tgt_br2" 00:16:36.097 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # true 00:16:36.097 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:36.097 Cannot find device "nvmf_init_br" 00:16:36.097 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # true 00:16:36.097 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:36.097 Cannot find device "nvmf_init_br2" 00:16:36.097 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # true 00:16:36.097 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:36.097 Cannot find device "nvmf_tgt_br" 00:16:36.097 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # true 00:16:36.097 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:36.097 Cannot find device "nvmf_tgt_br2" 00:16:36.097 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # true 00:16:36.097 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:36.097 Cannot find device "nvmf_br" 00:16:36.097 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # true 00:16:36.097 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:16:36.097 Cannot find device "nvmf_init_if" 00:16:36.097 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # true 00:16:36.097 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:36.097 Cannot find device "nvmf_init_if2" 00:16:36.097 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # true 00:16:36.097 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:36.098 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:36.098 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # true 00:16:36.098 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:36.098 Cannot 
open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:36.098 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # true 00:16:36.098 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:36.098 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:36.098 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:36.098 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:36.098 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:36.357 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:36.357 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:36.357 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:36.357 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:36.357 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:36.357 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:36.357 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:36.357 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:36.357 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:16:36.357 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:36.357 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:36.357 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:36.357 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:36.357 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:36.357 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:36.357 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:36.357 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:36.357 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:36.357 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:36.357 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
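For orientation, the nvmf_veth_init steps traced here build a small bridged topology: the host keeps nvmf_init_if and nvmf_init_if2 (10.0.0.1, 10.0.0.2), the nvmf_tgt_ns_spdk namespace gets nvmf_tgt_if and nvmf_tgt_if2 (10.0.0.3, 10.0.0.4), and the peer end of every veth pair is enslaved to the nvmf_br bridge. A condensed sketch of the same wiring for one initiator/target pair, with names and addresses taken from the trace:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br     # host-side pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br       # target-side pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                # move the target end into the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br                       # both peer ends join the bridge
    ip link set nvmf_tgt_br master nvmf_br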
nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:36.357 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:36.357 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:36.357 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:36.357 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:36.357 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:36.357 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:36.357 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:36.357 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:36.357 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:36.357 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.091 ms 00:16:36.357 00:16:36.357 --- 10.0.0.3 ping statistics --- 00:16:36.357 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:36.357 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:16:36.357 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:36.357 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:16:36.357 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.054 ms 00:16:36.357 00:16:36.357 --- 10.0.0.4 ping statistics --- 00:16:36.357 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:36.357 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:16:36.357 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:36.357 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:36.357 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:16:36.357 00:16:36.357 --- 10.0.0.1 ping statistics --- 00:16:36.357 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:36.357 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:16:36.357 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:36.357 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
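Note the ipts wrapper in the rules above: every ACCEPT rule is installed with an '-m comment --comment SPDK_NVMF:...' tag, and that tag is what lets the teardown at the end of the test (the iptr step further below) strip only the rules this run added. In outline:

    # install a tagged rule, as in the trace above
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
    # teardown: drop every SPDK_NVMF-tagged rule, keep everything else
    iptables-save | grep -v SPDK_NVMF | iptables-restore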
00:16:36.357 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.037 ms 00:16:36.357 00:16:36.357 --- 10.0.0.2 ping statistics --- 00:16:36.357 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:36.357 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:16:36.357 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:36.357 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@461 -- # return 0 00:16:36.357 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:36.357 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:36.357 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:36.357 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:36.357 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:36.357 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:36.357 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:36.357 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:16:36.357 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:36.357 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:36.357 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:16:36.358 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=73587 00:16:36.358 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:16:36.358 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 73587 00:16:36.358 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 73587 ']' 00:16:36.358 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:36.358 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:36.358 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:36.358 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:36.358 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:36.358 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:16:36.617 [2024-11-19 09:44:23.990920] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 
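Because nvmf_tgt is launched inside the namespace with --wait-for-rpc, subsystem initialization is held back until framework_start_init arrives over RPC; that is what lets wait_for_buf.sh shrink the iobuf small pool to 154 buffers before the TCP transport starts drawing from it. The rpc_cmd calls traced below boil down to roughly the following, assuming rpc_cmd is a thin wrapper over scripts/rpc.py against /var/tmp/spdk.sock:

    rpc() { scripts/rpc.py -s /var/tmp/spdk.sock "$@"; }
    rpc accel_set_options --small-cache-size 0 --large-cache-size 0
    rpc iobuf_set_options --small-pool-count 154 --small_bufsize=8192
    rpc framework_start_init                    # framework (and transports) come up only now
    rpc bdev_malloc_create -b Malloc0 32 512
    rpc nvmf_create_transport -t tcp -o -u 8192 -n 24 -b 24
    rpc nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001
    rpc nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
    rpc nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420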
00:16:36.617 [2024-11-19 09:44:23.991016] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:36.617 [2024-11-19 09:44:24.191390] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:36.876 [2024-11-19 09:44:24.249195] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:36.876 [2024-11-19 09:44:24.249249] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:36.876 [2024-11-19 09:44:24.249260] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:36.876 [2024-11-19 09:44:24.249269] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:36.876 [2024-11-19 09:44:24.249276] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:36.876 [2024-11-19 09:44:24.249669] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:37.446 09:44:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:37.446 09:44:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:16:37.446 09:44:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:37.446 09:44:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:37.446 09:44:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:16:37.704 09:44:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:37.704 09:44:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:16:37.704 09:44:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:16:37.704 09:44:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:16:37.704 09:44:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.704 09:44:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:16:37.704 09:44:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.705 09:44:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:16:37.705 09:44:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.705 09:44:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:16:37.705 09:44:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.705 09:44:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:16:37.705 09:44:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.705 09:44:25 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:16:37.705 [2024-11-19 09:44:25.140843] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:37.705 09:44:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.705 09:44:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:16:37.705 09:44:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.705 09:44:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:16:37.705 Malloc0 00:16:37.705 09:44:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.705 09:44:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:16:37.705 09:44:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.705 09:44:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:16:37.705 [2024-11-19 09:44:25.198439] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:37.705 09:44:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.705 09:44:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:16:37.705 09:44:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.705 09:44:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:16:37.705 09:44:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.705 09:44:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:16:37.705 09:44:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.705 09:44:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:16:37.705 09:44:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.705 09:44:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:16:37.705 09:44:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.705 09:44:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:16:37.705 [2024-11-19 09:44:25.222547] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:37.705 09:44:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.705 09:44:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:16:37.963 [2024-11-19 09:44:25.436323] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: 
Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:16:39.341 Initializing NVMe Controllers 00:16:39.341 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:16:39.341 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:16:39.341 Initialization complete. Launching workers. 00:16:39.341 ======================================================== 00:16:39.341 Latency(us) 00:16:39.341 Device Information : IOPS MiB/s Average min max 00:16:39.341 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 499.99 62.50 8000.26 4298.71 11994.87 00:16:39.341 ======================================================== 00:16:39.341 Total : 499.99 62.50 8000.26 4298.71 11994.87 00:16:39.341 00:16:39.341 09:44:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:16:39.341 09:44:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:16:39.341 09:44:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.341 09:44:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:16:39.341 09:44:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.341 09:44:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=4750 00:16:39.341 09:44:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 4750 -eq 0 ]] 00:16:39.341 09:44:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:16:39.341 09:44:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:16:39.341 09:44:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:39.341 09:44:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:16:39.341 09:44:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:39.341 09:44:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:16:39.341 09:44:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:39.341 09:44:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:39.341 rmmod nvme_tcp 00:16:39.341 rmmod nvme_fabrics 00:16:39.341 rmmod nvme_keyring 00:16:39.341 09:44:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:39.341 09:44:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:16:39.342 09:44:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:16:39.342 09:44:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 73587 ']' 00:16:39.342 09:44:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 73587 00:16:39.342 09:44:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 73587 ']' 00:16:39.342 09:44:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- 
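The pass/fail criterion of wait_for_buf is the iobuf retry counter rather than the perf numbers above: with only 154 small buffers behind a 4-queue 128 KiB randread run, the nvmf_TCP module is expected to hit the shared-buffer wait path, and this run recorded 4750 small-pool retries. The check traced here amounts to roughly:

    retry_count=$(rpc_cmd iobuf_get_stats \
        | jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry')
    [[ $retry_count -eq 0 ]] && exit 1   # zero retries would mean the wait path was never exercised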
# kill -0 73587 00:16:39.342 09:44:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # uname 00:16:39.342 09:44:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:39.342 09:44:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73587 00:16:39.342 killing process with pid 73587 00:16:39.342 09:44:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:39.342 09:44:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:39.342 09:44:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73587' 00:16:39.342 09:44:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 73587 00:16:39.342 09:44:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 73587 00:16:39.600 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:39.600 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:39.600 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:39.600 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:16:39.600 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:16:39.600 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:39.600 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:16:39.600 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:39.600 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:39.600 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:39.600 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:39.600 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:39.600 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:39.600 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:39.600 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:39.600 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:39.600 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:39.600 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:39.858 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:39.858 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:39.858 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:39.858 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:39.858 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:39.858 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:39.858 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:39.858 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:39.858 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@300 -- # return 0 00:16:39.858 00:16:39.858 real 0m4.037s 00:16:39.858 user 0m3.635s 00:16:39.858 sys 0m0.822s 00:16:39.859 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:39.859 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:16:39.859 ************************************ 00:16:39.859 END TEST nvmf_wait_for_buf 00:16:39.859 ************************************ 00:16:39.859 09:44:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:16:39.859 09:44:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ virt == phy ]] 00:16:39.859 09:44:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /home/vagrant/spdk_repo/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:16:39.859 09:44:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:39.859 09:44:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:39.859 09:44:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:39.859 ************************************ 00:16:39.859 START TEST nvmf_nsid 00:16:39.859 ************************************ 00:16:39.859 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:16:39.859 * Looking for test storage... 
00:16:39.859 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:39.859 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:39.859 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lcov --version 00:16:39.859 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:40.118 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:40.118 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:40.118 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:40.118 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:40.118 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:16:40.118 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:16:40.118 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:16:40.118 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:16:40.118 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:16:40.118 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:16:40.118 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:16:40.118 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:40.118 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:16:40.118 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:16:40.118 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:40.118 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:40.118 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:16:40.118 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:16:40.118 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:40.118 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:16:40.118 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:16:40.118 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:16:40.118 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:16:40.118 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:40.118 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:16:40.118 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:16:40.118 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:40.118 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:40.118 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:16:40.118 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:40.118 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:40.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:40.118 --rc genhtml_branch_coverage=1 00:16:40.118 --rc genhtml_function_coverage=1 00:16:40.118 --rc genhtml_legend=1 00:16:40.118 --rc geninfo_all_blocks=1 00:16:40.118 --rc geninfo_unexecuted_blocks=1 00:16:40.118 00:16:40.118 ' 00:16:40.118 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:40.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:40.118 --rc genhtml_branch_coverage=1 00:16:40.118 --rc genhtml_function_coverage=1 00:16:40.118 --rc genhtml_legend=1 00:16:40.118 --rc geninfo_all_blocks=1 00:16:40.118 --rc geninfo_unexecuted_blocks=1 00:16:40.118 00:16:40.118 ' 00:16:40.118 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:40.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:40.118 --rc genhtml_branch_coverage=1 00:16:40.118 --rc genhtml_function_coverage=1 00:16:40.118 --rc genhtml_legend=1 00:16:40.118 --rc geninfo_all_blocks=1 00:16:40.118 --rc geninfo_unexecuted_blocks=1 00:16:40.118 00:16:40.118 ' 00:16:40.118 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:40.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:40.118 --rc genhtml_branch_coverage=1 00:16:40.118 --rc genhtml_function_coverage=1 00:16:40.118 --rc genhtml_legend=1 00:16:40.118 --rc geninfo_all_blocks=1 00:16:40.118 --rc geninfo_unexecuted_blocks=1 00:16:40.118 00:16:40.118 ' 00:16:40.118 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:40.118 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:16:40.118 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
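The scripts/common.sh helpers traced above (lt calling cmp_versions) decide whether the installed lcov 1.15 is older than 2 by splitting each version string on dots (the real helper also splits on '-' and ':') and comparing component by component, which then selects the LCOV_OPTS exported just afterwards. A condensed sketch of the same idea, not the actual helper:

    lt() {                                       # is $1 older than $2?
        local -a a b
        local i
        IFS=. read -ra a <<< "$1"
        IFS=. read -ra b <<< "$2"
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1                                 # equal is not "less than"
    }
    lt 1.15 2 && echo "lcov predates 2.x"        # matches the 'lt 1.15 2' call above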
00:16:40.118 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:40.118 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:40.118 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:40.118 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:40.118 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:40.118 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:40.118 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:40.118 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:40.118 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:40.118 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c 00:16:40.118 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=9203ba0c-8506-4f0b-a886-a7f874c4694c 00:16:40.118 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:40.118 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:40.118 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:40.118 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:40.118 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:40.118 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:16:40.118 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:40.118 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:40.118 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:40.118 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:40.118 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:40.118 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:40.118 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:16:40.119 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:40.119 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:16:40.119 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:40.119 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:40.119 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:40.119 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:40.119 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:40.119 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:40.119 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:40.119 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:40.119 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:40.119 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:40.119 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:16:40.119 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:16:40.119 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # 
subnqn3=nqn.2024-10.io.spdk:cnode2 00:16:40.119 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:16:40.119 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:16:40.119 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:16:40.119 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:40.119 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:40.119 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:40.119 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:40.119 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:40.119 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:40.119 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:40.119 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:40.119 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:16:40.119 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:16:40.119 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:16:40.119 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:16:40.119 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:16:40.119 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@460 -- # nvmf_veth_init 00:16:40.119 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:40.119 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:40.119 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:40.119 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:40.119 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:40.119 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:40.119 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:40.119 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:40.119 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:40.119 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:40.119 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:40.119 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:40.119 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:40.119 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@158 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:40.119 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:40.119 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:40.119 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:40.119 Cannot find device "nvmf_init_br" 00:16:40.119 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@162 -- # true 00:16:40.119 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:40.119 Cannot find device "nvmf_init_br2" 00:16:40.119 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@163 -- # true 00:16:40.119 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:40.119 Cannot find device "nvmf_tgt_br" 00:16:40.119 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@164 -- # true 00:16:40.119 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:16:40.119 Cannot find device "nvmf_tgt_br2" 00:16:40.119 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@165 -- # true 00:16:40.119 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:40.119 Cannot find device "nvmf_init_br" 00:16:40.119 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@166 -- # true 00:16:40.119 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:40.119 Cannot find device "nvmf_init_br2" 00:16:40.119 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@167 -- # true 00:16:40.119 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:40.119 Cannot find device "nvmf_tgt_br" 00:16:40.119 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@168 -- # true 00:16:40.119 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:40.119 Cannot find device "nvmf_tgt_br2" 00:16:40.119 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@169 -- # true 00:16:40.119 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:40.119 Cannot find device "nvmf_br" 00:16:40.119 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@170 -- # true 00:16:40.119 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:16:40.119 Cannot find device "nvmf_init_if" 00:16:40.119 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@171 -- # true 00:16:40.119 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:40.119 Cannot find device "nvmf_init_if2" 00:16:40.119 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@172 -- # true 00:16:40.119 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:40.378 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:40.378 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@173 -- # true 00:16:40.378 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 
00:16:40.378 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:40.378 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@174 -- # true 00:16:40.378 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:40.378 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:40.378 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:40.378 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:40.378 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:40.378 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:40.378 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:40.378 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:40.378 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:40.378 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:40.378 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:40.378 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:40.378 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:40.378 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:16:40.378 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:40.378 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:40.378 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:40.378 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:40.378 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:40.378 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:40.378 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:40.378 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:40.378 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:40.378 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:40.378 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:40.378 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 
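As with wait_for_buf, nsid.sh tears the veth fixture down and rebuilds it from scratch, so the "Cannot find device" and "Cannot open network namespace" lines during the delete pass above are expected noise: the trace shows each failing cleanup command immediately followed by a bare 'true', i.e. the script tolerates missing devices along the lines of:

    # best-effort cleanup of a possibly non-existent fixture (pattern inferred from the trace)
    ip link set nvmf_init_br nomaster || true
    ip link delete nvmf_br type bridge || true
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if || true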
00:16:40.378 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:40.378 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:40.378 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:40.378 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:40.378 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:40.378 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:40.378 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:40.378 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:40.378 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.091 ms 00:16:40.378 00:16:40.378 --- 10.0.0.3 ping statistics --- 00:16:40.378 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:40.379 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:16:40.379 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:40.379 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:16:40.379 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.041 ms 00:16:40.379 00:16:40.379 --- 10.0.0.4 ping statistics --- 00:16:40.379 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:40.379 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:16:40.379 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:40.379 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:40.379 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:16:40.379 00:16:40.379 --- 10.0.0.1 ping statistics --- 00:16:40.379 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:40.379 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:16:40.379 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:40.379 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
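The four ping checks here verify connectivity in both directions across nvmf_br before any NVMe-oF traffic is attempted: from the host to the namespaced target addresses, and from inside the namespace back to the host-side initiator addresses:

    ping -c 1 10.0.0.3                                   # host -> nvmf_tgt_if  (namespace)
    ping -c 1 10.0.0.4                                   # host -> nvmf_tgt_if2 (namespace)
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1    # namespace -> nvmf_init_if  (host)
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2    # namespace -> nvmf_init_if2 (host)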
00:16:40.379 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.044 ms 00:16:40.379 00:16:40.379 --- 10.0.0.2 ping statistics --- 00:16:40.379 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:40.379 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:16:40.379 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:40.379 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@461 -- # return 0 00:16:40.379 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:40.379 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:40.379 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:40.379 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:40.379 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:40.379 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:40.379 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:40.379 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:16:40.379 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:40.379 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:40.379 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:16:40.379 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=73865 00:16:40.379 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 73865 00:16:40.379 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:16:40.379 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 73865 ']' 00:16:40.379 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:40.379 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:40.379 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:40.379 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:40.379 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:40.379 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:16:40.637 [2024-11-19 09:44:28.053210] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 
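At this point nvmfappstart has launched the target: the traced command is ip netns exec nvmf_tgt_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 1, and waitforlisten blocks until that process (pid 73865 here) answers on /var/tmp/spdk.sock. A minimal sketch of the start-and-wait pattern; the polling loop is an assumption about what waitforlisten does, not a copy of it:

    # Launch nvmf_tgt inside the target namespace and wait for its RPC socket.
    NVMF_APP=(ip netns exec nvmf_tgt_ns_spdk
              /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1)
    "${NVMF_APP[@]}" &
    nvmfpid=$!

    # Poll until the app creates /var/tmp/spdk.sock and answers a trivial RPC
    # (autotest_common.sh's waitforlisten implements the real retry logic).
    for ((i = 0; i < 100; i++)); do
        if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; then
            break
        fi
        sleep 0.1
    done
    echo "nvmf_tgt up, pid $nvmfpid"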
00:16:40.638 [2024-11-19 09:44:28.053315] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:40.638 [2024-11-19 09:44:28.203185] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:40.897 [2024-11-19 09:44:28.269944] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:40.897 [2024-11-19 09:44:28.270001] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:40.897 [2024-11-19 09:44:28.270016] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:40.897 [2024-11-19 09:44:28.270026] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:40.897 [2024-11-19 09:44:28.270035] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:40.897 [2024-11-19 09:44:28.270540] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:40.897 [2024-11-19 09:44:28.331382] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:40.897 09:44:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:40.897 09:44:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:16:40.897 09:44:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:40.897 09:44:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:40.897 09:44:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:16:40.897 09:44:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:40.897 09:44:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:16:40.897 09:44:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=73884 00:16:40.897 09:44:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:16:40.897 09:44:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.3 00:16:40.897 09:44:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:16:40.897 09:44:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:16:40.897 09:44:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:40.897 09:44:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:40.897 09:44:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:40.897 09:44:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:40.897 09:44:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:40.897 09:44:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:40.897 09:44:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:40.897 09:44:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 
-- # [[ -z 10.0.0.1 ]] 00:16:40.897 09:44:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:40.897 09:44:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:16:40.897 09:44:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:16:40.897 09:44:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=e19dad58-ddf0-4633-8af8-4a0ff59e7d30 00:16:40.897 09:44:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:16:40.897 09:44:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=75d5ea1c-37ee-4f39-93a7-1523f37e9d74 00:16:40.897 09:44:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:16:40.897 09:44:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=891d9fe2-155e-406d-a891-69102517a13c 00:16:40.897 09:44:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:16:40.897 09:44:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.897 09:44:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:16:40.897 null0 00:16:40.897 null1 00:16:40.897 null2 00:16:40.898 [2024-11-19 09:44:28.500842] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:40.898 [2024-11-19 09:44:28.513844] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:16:40.898 [2024-11-19 09:44:28.513943] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73884 ] 00:16:41.156 [2024-11-19 09:44:28.524927] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:41.156 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 00:16:41.156 09:44:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.156 09:44:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 73884 /var/tmp/tgt2.sock 00:16:41.156 09:44:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 73884 ']' 00:16:41.156 09:44:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:16:41.156 09:44:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:41.156 09:44:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 
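Three namespace UUIDs have just been generated (ns1uuid/ns2uuid/ns3uuid) and the two targets are being wired up; the point of the nsid test is that each namespace's NGUID, as reported to the kernel initiator, must equal its UUID with the hyphens removed. The nvme_get_nguid / uuid2nguid helpers traced further down perform that check roughly as in this sketch (the jq field, the tr -d - trick, and the uppercase comparison come from the trace; the function wrappers are illustrative):

    # Compare a namespace's reported NGUID against the UUID it was created with.
    # uuid2nguid: strip hyphens (the trace's `tr -d -`) and uppercase for comparison.
    uuid2nguid() { tr -d - <<< "${1^^}"; }

    nvme_get_nguid() {
        local ctrlr=$1 nsid=$2
        nvme id-ns "/dev/${ctrlr}n${nsid}" -o json | jq -r .nguid
    }

    ns1uuid=e19dad58-ddf0-4633-8af8-4a0ff59e7d30     # value generated above
    nguid=$(nvme_get_nguid nvme0 1)
    [[ "${nguid^^}" == "$(uuid2nguid "$ns1uuid")" ]] && echo "nsid 1 NGUID matches its UUID"

The three comparisons below (E19DAD58..., 75D5EA1C..., 891D9FE2...) are exactly this check run against nvme0n1, nvme0n2, and nvme0n3 after connecting to nqn.2024-10.io.spdk:cnode2 on 10.0.0.1:4421.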
00:16:41.156 09:44:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:41.156 09:44:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:16:41.156 [2024-11-19 09:44:28.667707] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:41.156 [2024-11-19 09:44:28.736868] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:41.417 [2024-11-19 09:44:28.811177] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:41.417 09:44:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:41.417 09:44:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:16:41.417 09:44:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:16:41.999 [2024-11-19 09:44:29.467971] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:41.999 [2024-11-19 09:44:29.484023] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:16:41.999 nvme0n1 nvme0n2 00:16:41.999 nvme1n1 00:16:41.999 09:44:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:16:41.999 09:44:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:16:41.999 09:44:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c --hostid=9203ba0c-8506-4f0b-a886-a7f874c4694c 00:16:42.258 09:44:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:16:42.258 09:44:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:16:42.258 09:44:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:16:42.258 09:44:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:16:42.258 09:44:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:16:42.258 09:44:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:16:42.258 09:44:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:16:42.258 09:44:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:16:42.258 09:44:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:16:42.258 09:44:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:16:42.258 09:44:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:16:42.258 09:44:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:16:42.258 09:44:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:16:43.191 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:16:43.191 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:16:43.192 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:16:43.192 09:44:30 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:16:43.192 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:16:43.192 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid e19dad58-ddf0-4633-8af8-4a0ff59e7d30 00:16:43.192 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:16:43.192 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:16:43.192 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:16:43.192 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:16:43.192 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:16:43.192 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=e19dad58ddf046338af84a0ff59e7d30 00:16:43.192 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo E19DAD58DDF046338AF84A0FF59E7D30 00:16:43.192 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ E19DAD58DDF046338AF84A0FF59E7D30 == \E\1\9\D\A\D\5\8\D\D\F\0\4\6\3\3\8\A\F\8\4\A\0\F\F\5\9\E\7\D\3\0 ]] 00:16:43.192 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:16:43.192 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:16:43.192 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:16:43.192 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:16:43.192 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:16:43.192 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:16:43.192 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:16:43.192 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 75d5ea1c-37ee-4f39-93a7-1523f37e9d74 00:16:43.192 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:16:43.192 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:16:43.192 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:16:43.192 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:16:43.192 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:16:43.450 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=75d5ea1c37ee4f3993a71523f37e9d74 00:16:43.450 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 75D5EA1C37EE4F3993A71523F37E9D74 00:16:43.450 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 75D5EA1C37EE4F3993A71523F37E9D74 == \7\5\D\5\E\A\1\C\3\7\E\E\4\F\3\9\9\3\A\7\1\5\2\3\F\3\7\E\9\D\7\4 ]] 00:16:43.450 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:16:43.450 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:16:43.450 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:16:43.450 09:44:30 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:16:43.450 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:16:43.450 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:16:43.450 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:16:43.450 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid 891d9fe2-155e-406d-a891-69102517a13c 00:16:43.450 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:16:43.450 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:16:43.450 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:16:43.450 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:16:43.451 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:16:43.451 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=891d9fe2155e406da89169102517a13c 00:16:43.451 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 891D9FE2155E406DA89169102517A13C 00:16:43.451 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ 891D9FE2155E406DA89169102517A13C == \8\9\1\D\9\F\E\2\1\5\5\E\4\0\6\D\A\8\9\1\6\9\1\0\2\5\1\7\A\1\3\C ]] 00:16:43.451 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:16:43.709 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:16:43.709 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:16:43.709 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 73884 00:16:43.709 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 73884 ']' 00:16:43.709 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 73884 00:16:43.709 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:16:43.709 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:43.709 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73884 00:16:43.709 killing process with pid 73884 00:16:43.709 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:43.709 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:43.709 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73884' 00:16:43.709 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 73884 00:16:43.709 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 73884 00:16:43.968 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:16:43.968 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:43.968 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:16:44.226 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # 
'[' tcp == tcp ']' 00:16:44.227 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 00:16:44.227 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:44.227 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:44.227 rmmod nvme_tcp 00:16:44.227 rmmod nvme_fabrics 00:16:44.227 rmmod nvme_keyring 00:16:44.227 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:44.227 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:16:44.227 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:16:44.227 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 73865 ']' 00:16:44.227 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 73865 00:16:44.227 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 73865 ']' 00:16:44.227 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 73865 00:16:44.227 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:16:44.227 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:44.227 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73865 00:16:44.227 killing process with pid 73865 00:16:44.227 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:44.227 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:44.227 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73865' 00:16:44.227 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 73865 00:16:44.227 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 73865 00:16:44.485 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:44.485 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:44.485 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:44.485 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:16:44.485 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:16:44.485 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:44.485 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:16:44.485 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:44.485 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:44.485 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:44.485 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:44.485 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:44.486 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@236 -- # ip link set 
nvmf_tgt_br2 nomaster 00:16:44.486 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:44.486 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:44.486 09:44:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:44.486 09:44:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:44.486 09:44:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:44.486 09:44:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:44.486 09:44:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:44.486 09:44:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:44.486 09:44:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:44.744 09:44:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:44.744 09:44:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:44.744 09:44:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:44.744 09:44:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:44.744 09:44:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@300 -- # return 0 00:16:44.744 00:16:44.744 real 0m4.772s 00:16:44.744 user 0m7.088s 00:16:44.744 sys 0m1.738s 00:16:44.744 ************************************ 00:16:44.744 END TEST nvmf_nsid 00:16:44.744 ************************************ 00:16:44.744 09:44:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:44.744 09:44:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:16:44.744 09:44:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:16:44.744 00:16:44.744 real 5m15.251s 00:16:44.744 user 11m4.575s 00:16:44.744 sys 1m8.598s 00:16:44.744 09:44:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:44.744 09:44:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:44.744 ************************************ 00:16:44.744 END TEST nvmf_target_extra 00:16:44.744 ************************************ 00:16:44.744 09:44:32 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:16:44.744 09:44:32 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:44.744 09:44:32 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:44.745 09:44:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:44.745 ************************************ 00:16:44.745 START TEST nvmf_host 00:16:44.745 ************************************ 00:16:44.745 09:44:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:16:44.745 * Looking for test storage... 
00:16:44.745 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:16:44.745 09:44:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:44.745 09:44:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lcov --version 00:16:44.745 09:44:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:45.004 09:44:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:45.004 09:44:32 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:45.004 09:44:32 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:45.004 09:44:32 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:45.004 09:44:32 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:16:45.004 09:44:32 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:16:45.004 09:44:32 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:16:45.004 09:44:32 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:16:45.004 09:44:32 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:16:45.004 09:44:32 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:16:45.004 09:44:32 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:16:45.004 09:44:32 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:45.004 09:44:32 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:16:45.004 09:44:32 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:16:45.004 09:44:32 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:45.004 09:44:32 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:45.004 09:44:32 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:16:45.004 09:44:32 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:16:45.004 09:44:32 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:45.004 09:44:32 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:16:45.004 09:44:32 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:16:45.004 09:44:32 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:16:45.004 09:44:32 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:16:45.004 09:44:32 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:45.004 09:44:32 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:16:45.004 09:44:32 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:16:45.004 09:44:32 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:45.004 09:44:32 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:45.004 09:44:32 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:16:45.004 09:44:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:45.004 09:44:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:45.004 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:45.004 --rc genhtml_branch_coverage=1 00:16:45.004 --rc genhtml_function_coverage=1 00:16:45.004 --rc genhtml_legend=1 00:16:45.004 --rc geninfo_all_blocks=1 00:16:45.004 --rc geninfo_unexecuted_blocks=1 00:16:45.004 00:16:45.004 ' 00:16:45.004 09:44:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:45.004 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:16:45.004 --rc genhtml_branch_coverage=1 00:16:45.004 --rc genhtml_function_coverage=1 00:16:45.004 --rc genhtml_legend=1 00:16:45.004 --rc geninfo_all_blocks=1 00:16:45.004 --rc geninfo_unexecuted_blocks=1 00:16:45.004 00:16:45.004 ' 00:16:45.004 09:44:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:45.004 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:45.004 --rc genhtml_branch_coverage=1 00:16:45.004 --rc genhtml_function_coverage=1 00:16:45.004 --rc genhtml_legend=1 00:16:45.004 --rc geninfo_all_blocks=1 00:16:45.004 --rc geninfo_unexecuted_blocks=1 00:16:45.004 00:16:45.004 ' 00:16:45.004 09:44:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:45.004 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:45.004 --rc genhtml_branch_coverage=1 00:16:45.004 --rc genhtml_function_coverage=1 00:16:45.004 --rc genhtml_legend=1 00:16:45.004 --rc geninfo_all_blocks=1 00:16:45.004 --rc geninfo_unexecuted_blocks=1 00:16:45.004 00:16:45.004 ' 00:16:45.004 09:44:32 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:45.004 09:44:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:16:45.004 09:44:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:45.005 09:44:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:45.005 09:44:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:45.005 09:44:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:45.005 09:44:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:45.005 09:44:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:45.005 09:44:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:45.005 09:44:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:45.005 09:44:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:45.005 09:44:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:45.005 09:44:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c 00:16:45.005 09:44:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=9203ba0c-8506-4f0b-a886-a7f874c4694c 00:16:45.005 09:44:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:45.005 09:44:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:45.005 09:44:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:45.005 09:44:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:45.005 09:44:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:45.005 09:44:32 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:16:45.005 09:44:32 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:45.005 09:44:32 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:45.005 09:44:32 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:45.005 09:44:32 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:45.005 09:44:32 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:45.005 09:44:32 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:45.005 09:44:32 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:16:45.005 09:44:32 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:45.005 09:44:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:16:45.005 09:44:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:45.005 09:44:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:45.005 09:44:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:45.005 09:44:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:45.005 09:44:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:45.005 09:44:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:45.005 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:45.005 09:44:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:45.005 09:44:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:45.005 09:44:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:45.005 09:44:32 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:16:45.005 09:44:32 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:16:45.005 09:44:32 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 1 -eq 0 ]] 00:16:45.005 09:44:32 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:16:45.005 
09:44:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:45.005 09:44:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:45.005 09:44:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:16:45.005 ************************************ 00:16:45.005 START TEST nvmf_identify 00:16:45.005 ************************************ 00:16:45.005 09:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:16:45.005 * Looking for test storage... 00:16:45.005 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:45.005 09:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:45.005 09:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lcov --version 00:16:45.005 09:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:45.264 09:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:45.264 09:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:45.264 09:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:45.264 09:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:45.264 09:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:16:45.264 09:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:16:45.264 09:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:16:45.264 09:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:16:45.264 09:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:16:45.264 09:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:16:45.264 09:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:16:45.264 09:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:45.264 09:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:16:45.264 09:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:16:45.264 09:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:45.264 09:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:45.264 09:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:16:45.264 09:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:16:45.264 09:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:45.264 09:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:16:45.264 09:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:16:45.264 09:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:16:45.264 09:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:16:45.264 09:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:45.264 09:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:16:45.264 09:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:16:45.264 09:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:45.264 09:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:45.264 09:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:16:45.264 09:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:45.264 09:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:45.264 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:45.264 --rc genhtml_branch_coverage=1 00:16:45.264 --rc genhtml_function_coverage=1 00:16:45.264 --rc genhtml_legend=1 00:16:45.264 --rc geninfo_all_blocks=1 00:16:45.264 --rc geninfo_unexecuted_blocks=1 00:16:45.264 00:16:45.264 ' 00:16:45.264 09:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:45.264 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:45.264 --rc genhtml_branch_coverage=1 00:16:45.264 --rc genhtml_function_coverage=1 00:16:45.264 --rc genhtml_legend=1 00:16:45.264 --rc geninfo_all_blocks=1 00:16:45.264 --rc geninfo_unexecuted_blocks=1 00:16:45.264 00:16:45.264 ' 00:16:45.264 09:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:45.264 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:45.264 --rc genhtml_branch_coverage=1 00:16:45.264 --rc genhtml_function_coverage=1 00:16:45.264 --rc genhtml_legend=1 00:16:45.264 --rc geninfo_all_blocks=1 00:16:45.264 --rc geninfo_unexecuted_blocks=1 00:16:45.264 00:16:45.264 ' 00:16:45.264 09:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:45.264 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:45.264 --rc genhtml_branch_coverage=1 00:16:45.264 --rc genhtml_function_coverage=1 00:16:45.264 --rc genhtml_legend=1 00:16:45.264 --rc geninfo_all_blocks=1 00:16:45.264 --rc geninfo_unexecuted_blocks=1 00:16:45.264 00:16:45.264 ' 00:16:45.264 09:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:45.264 09:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:16:45.264 09:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:45.264 09:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:16:45.264 09:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:45.264 09:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:45.264 09:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:45.264 09:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:45.264 09:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:45.264 09:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:45.264 09:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:45.264 09:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:45.264 09:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c 00:16:45.264 09:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=9203ba0c-8506-4f0b-a886-a7f874c4694c 00:16:45.264 09:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:45.264 09:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:45.264 09:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:45.264 09:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:45.264 09:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:45.264 09:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:16:45.264 09:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:45.264 09:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:45.264 09:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:45.265 09:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:45.265 09:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:45.265 
09:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:45.265 09:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:16:45.265 09:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:45.265 09:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:16:45.265 09:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:45.265 09:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:45.265 09:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:45.265 09:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:45.265 09:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:45.265 09:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:45.265 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:45.265 09:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:45.265 09:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:45.265 09:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:45.265 09:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:45.265 09:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:45.265 09:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:16:45.265 09:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:45.265 09:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:45.265 09:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:45.265 09:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:45.265 09:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:45.265 09:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:45.265 09:44:32 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:45.265 09:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:45.265 09:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:16:45.265 09:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:16:45.265 09:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:16:45.265 09:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:16:45.265 09:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:16:45.265 09:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@460 -- # nvmf_veth_init 00:16:45.265 09:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:45.265 09:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:45.265 09:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:45.265 09:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:45.265 09:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:45.265 09:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:45.265 09:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:45.265 09:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:45.265 09:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:45.265 09:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:45.265 09:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:45.265 09:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:45.265 09:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:45.265 09:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:45.265 09:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:45.265 09:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:45.265 09:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:45.265 Cannot find device "nvmf_init_br" 00:16:45.265 09:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # true 00:16:45.265 09:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:45.265 Cannot find device "nvmf_init_br2" 00:16:45.265 09:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # true 00:16:45.265 09:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:45.265 Cannot find device "nvmf_tgt_br" 00:16:45.265 09:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # true 00:16:45.265 09:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 
00:16:45.265 Cannot find device "nvmf_tgt_br2" 00:16:45.265 09:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # true 00:16:45.265 09:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:45.265 Cannot find device "nvmf_init_br" 00:16:45.265 09:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # true 00:16:45.265 09:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:45.265 Cannot find device "nvmf_init_br2" 00:16:45.265 09:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # true 00:16:45.265 09:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:45.265 Cannot find device "nvmf_tgt_br" 00:16:45.265 09:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # true 00:16:45.265 09:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:45.265 Cannot find device "nvmf_tgt_br2" 00:16:45.265 09:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # true 00:16:45.265 09:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:45.265 Cannot find device "nvmf_br" 00:16:45.265 09:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # true 00:16:45.265 09:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:16:45.265 Cannot find device "nvmf_init_if" 00:16:45.265 09:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # true 00:16:45.265 09:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:45.265 Cannot find device "nvmf_init_if2" 00:16:45.265 09:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # true 00:16:45.265 09:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:45.265 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:45.265 09:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # true 00:16:45.265 09:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:45.265 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:45.265 09:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # true 00:16:45.265 09:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:45.265 09:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:45.265 09:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:45.265 09:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:45.265 09:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:45.524 09:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:45.524 09:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:45.524 09:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:45.524 
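The burst of "Cannot find device ..." and "Cannot open network namespace ..." messages above is expected: identify.sh starts with the same nvmf_veth_init, and its teardown phase runs here against a machine where nothing from a previous setup exists. Each cleanup command in the trace is followed by true at the same common.sh line number, i.e. the failure is tolerated and the script moves on, roughly in this form (a sketch of the pattern, not the script verbatim):

    # Tolerant teardown before (re)building the test network; every command may
    # fail harmlessly when the device or namespace does not exist yet.
    ip link set nvmf_init_br nomaster || true
    ip link set nvmf_tgt_br2 nomaster || true
    ip link set nvmf_init_br down || true
    ip link delete nvmf_br type bridge || true
    ip link delete nvmf_init_if || true
    ip link delete nvmf_init_if2 || true
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if || true
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 || true
    # ...after which the namespace, veth pairs, bridge, and addresses are
    # re-created exactly as in the nsid test earlier in this log.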
09:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:45.524 09:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:45.524 09:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:45.524 09:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:45.524 09:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:45.524 09:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:16:45.524 09:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:45.524 09:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:45.524 09:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:45.524 09:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:45.524 09:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:45.524 09:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:45.524 09:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:45.524 09:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:45.524 09:44:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:45.524 09:44:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:45.524 09:44:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:45.524 09:44:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:45.524 09:44:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:45.524 09:44:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:45.524 09:44:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:45.524 09:44:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:45.524 09:44:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:45.524 09:44:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:45.524 09:44:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:45.524 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:16:45.524 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.070 ms 00:16:45.524 00:16:45.524 --- 10.0.0.3 ping statistics --- 00:16:45.524 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:45.524 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:16:45.524 09:44:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:45.524 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:16:45.524 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.041 ms 00:16:45.524 00:16:45.524 --- 10.0.0.4 ping statistics --- 00:16:45.524 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:45.524 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:16:45.524 09:44:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:45.524 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:45.524 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:16:45.524 00:16:45.524 --- 10.0.0.1 ping statistics --- 00:16:45.524 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:45.524 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:16:45.524 09:44:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:45.524 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:45.524 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms 00:16:45.524 00:16:45.524 --- 10.0.0.2 ping statistics --- 00:16:45.524 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:45.524 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:16:45.524 09:44:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:45.524 09:44:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@461 -- # return 0 00:16:45.524 09:44:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:45.524 09:44:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:45.524 09:44:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:45.524 09:44:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:45.524 09:44:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:45.524 09:44:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:45.524 09:44:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:45.524 09:44:33 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:16:45.524 09:44:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:45.524 09:44:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:45.524 09:44:33 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=74246 00:16:45.524 09:44:33 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:45.524 09:44:33 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:45.524 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
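The records above are nvmf/common.sh rebuilding the test network from scratch: the initial "Cannot find device" / "Cannot open network namespace" messages are the expected result of tearing down interfaces that do not exist yet (each teardown is followed by a true), after which two initiator-side and two target-side veth pairs are created, the target ends are moved into the nvmf_tgt_ns_spdk namespace, the host-side peers are enslaved to the nvmf_br bridge, TCP port 4420 is opened in iptables, and reachability is verified with one ping in each direction. Condensed into plain commands, assembled from the log rather than copied verbatim from the helper:

    # Condensed rebuild of the veth/bridge topology used by this run
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if  type veth peer name nvmf_init_br     # initiator-side pairs
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br      # target-side pairs
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk                # target ends live in the netns
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
    ip link add nvmf_br type bridge
    for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 \
               nvmf_tgt_br nvmf_tgt_br2 nvmf_br; do
        ip link set "$dev" up
    done
    ip netns exec nvmf_tgt_ns_spdk sh -c \
        'ip link set nvmf_tgt_if up; ip link set nvmf_tgt_if2 up; ip link set lo up'
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" master nvmf_br                          # host-side peers join the bridge
    done
    iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
    iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.3 && ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1   # reachability check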
00:16:45.524 09:44:33 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 74246 00:16:45.524 09:44:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 74246 ']' 00:16:45.524 09:44:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:45.524 09:44:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:45.524 09:44:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:45.524 09:44:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:45.524 09:44:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:45.837 [2024-11-19 09:44:33.170199] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:16:45.837 [2024-11-19 09:44:33.170605] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:45.837 [2024-11-19 09:44:33.327928] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:45.837 [2024-11-19 09:44:33.401035] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:45.837 [2024-11-19 09:44:33.401307] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:45.837 [2024-11-19 09:44:33.401468] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:45.837 [2024-11-19 09:44:33.401710] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:45.837 [2024-11-19 09:44:33.401913] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
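host/identify.sh then starts the target inside that namespace: ip netns exec nvmf_tgt_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF (instance id 0, all tracepoint groups, core mask 0xF, which matches the "Total cores available: 4" notice), records nvmfpid=74246, and waits for the app to answer on /var/tmp/spdk.sock before issuing any RPC. A minimal stand-in for that launch-and-wait step; the rpc.py polling loop is an approximation of what waitforlisten does, not a copy of it:

    # Start nvmf_tgt inside the namespace and wait for its RPC socket (sketch)
    SPDK=/home/vagrant/spdk_repo/spdk
    ip netns exec nvmf_tgt_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # Poll the default RPC socket until the target responds
    until "$SPDK/scripts/rpc.py" -t 1 rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" || { echo "nvmf_tgt exited early" >&2; exit 1; }
        sleep 0.5
    done
    echo "nvmf_tgt ($nvmfpid) is listening on /var/tmp/spdk.sock"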
00:16:45.837 [2024-11-19 09:44:33.403370] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:45.837 [2024-11-19 09:44:33.403459] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:45.837 [2024-11-19 09:44:33.403547] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:45.837 [2024-11-19 09:44:33.403548] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:46.094 [2024-11-19 09:44:33.462775] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:46.659 09:44:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:46.659 09:44:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:16:46.659 09:44:34 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:46.659 09:44:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.659 09:44:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:46.659 [2024-11-19 09:44:34.139696] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:46.659 09:44:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.659 09:44:34 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:16:46.659 09:44:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:46.659 09:44:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:46.659 09:44:34 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:46.659 09:44:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.659 09:44:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:46.659 Malloc0 00:16:46.659 09:44:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.659 09:44:34 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:46.659 09:44:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.659 09:44:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:46.659 09:44:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.659 09:44:34 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:16:46.659 09:44:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.659 09:44:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:46.659 09:44:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.659 09:44:34 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:16:46.659 09:44:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.659 09:44:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:46.659 [2024-11-19 09:44:34.256990] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:46.659 09:44:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.659 09:44:34 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:16:46.659 09:44:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.659 09:44:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:46.659 09:44:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.659 09:44:34 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:16:46.659 09:44:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.659 09:44:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:46.659 [ 00:16:46.659 { 00:16:46.659 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:46.659 "subtype": "Discovery", 00:16:46.659 "listen_addresses": [ 00:16:46.659 { 00:16:46.659 "trtype": "TCP", 00:16:46.659 "adrfam": "IPv4", 00:16:46.659 "traddr": "10.0.0.3", 00:16:46.659 "trsvcid": "4420" 00:16:46.659 } 00:16:46.659 ], 00:16:46.659 "allow_any_host": true, 00:16:46.659 "hosts": [] 00:16:46.659 }, 00:16:46.659 { 00:16:46.659 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:46.659 "subtype": "NVMe", 00:16:46.918 "listen_addresses": [ 00:16:46.918 { 00:16:46.918 "trtype": "TCP", 00:16:46.918 "adrfam": "IPv4", 00:16:46.918 "traddr": "10.0.0.3", 00:16:46.918 "trsvcid": "4420" 00:16:46.918 } 00:16:46.918 ], 00:16:46.918 "allow_any_host": true, 00:16:46.918 "hosts": [], 00:16:46.918 "serial_number": "SPDK00000000000001", 00:16:46.918 "model_number": "SPDK bdev Controller", 00:16:46.918 "max_namespaces": 32, 00:16:46.918 "min_cntlid": 1, 00:16:46.918 "max_cntlid": 65519, 00:16:46.918 "namespaces": [ 00:16:46.918 { 00:16:46.918 "nsid": 1, 00:16:46.918 "bdev_name": "Malloc0", 00:16:46.918 "name": "Malloc0", 00:16:46.918 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:16:46.918 "eui64": "ABCDEF0123456789", 00:16:46.918 "uuid": "c9ff6d93-e070-4efb-aef0-051de82b3462" 00:16:46.918 } 00:16:46.918 ] 00:16:46.918 } 00:16:46.918 ] 00:16:46.919 09:44:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.919 09:44:34 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:16:46.919 [2024-11-19 09:44:34.312053] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 
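Each rpc_cmd above is a thin wrapper around scripts/rpc.py talking to the target on /var/tmp/spdk.sock, and the sequence builds the whole configuration under test: a TCP transport created with the "-t tcp -o -u 8192" options from NVMF_TRANSPORT_OPTS, a 64 MiB malloc bdev with 512-byte blocks, subsystem cnode1 (any host allowed, fixed serial), the Malloc0 namespace with a fixed NGUID/EUI64, and listeners on 10.0.0.3:4420 for both cnode1 and the discovery subsystem; nvmf_get_subsystems then returns the JSON dump shown. The same configuration issued directly, with the rpc.py path assumed from the repo layout seen in this run:

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py"   # path assumed; rpc_cmd resolves this itself
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC bdev_malloc_create 64 512 -b Malloc0           # 64 MiB bdev, 512-byte blocks
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
        --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
    $RPC nvmf_get_subsystems                            # returns the JSON subsystem dump shown above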
00:16:46.919 [2024-11-19 09:44:34.312258] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74281 ] 00:16:46.919 [2024-11-19 09:44:34.469489] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:16:46.919 [2024-11-19 09:44:34.469562] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:16:46.919 [2024-11-19 09:44:34.469569] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:16:46.919 [2024-11-19 09:44:34.469584] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:16:46.919 [2024-11-19 09:44:34.469595] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:16:46.919 [2024-11-19 09:44:34.469961] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:16:46.919 [2024-11-19 09:44:34.470037] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1519750 0 00:16:46.919 [2024-11-19 09:44:34.484259] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:16:46.919 [2024-11-19 09:44:34.484285] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:16:46.919 [2024-11-19 09:44:34.484292] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:16:46.919 [2024-11-19 09:44:34.484296] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:16:46.919 [2024-11-19 09:44:34.484329] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:46.919 [2024-11-19 09:44:34.484340] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:46.919 [2024-11-19 09:44:34.484344] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1519750) 00:16:46.919 [2024-11-19 09:44:34.484360] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:16:46.919 [2024-11-19 09:44:34.484403] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x157d740, cid 0, qid 0 00:16:46.919 [2024-11-19 09:44:34.492304] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:46.919 [2024-11-19 09:44:34.492330] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:46.919 [2024-11-19 09:44:34.492336] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:46.919 [2024-11-19 09:44:34.492342] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x157d740) on tqpair=0x1519750 00:16:46.919 [2024-11-19 09:44:34.492356] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:16:46.919 [2024-11-19 09:44:34.492366] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:16:46.919 [2024-11-19 09:44:34.492372] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:16:46.919 [2024-11-19 09:44:34.492390] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:46.919 [2024-11-19 09:44:34.492396] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
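From this point the log is spdk_nvme_identify (-L all enables every debug log flag) bringing up the discovery controller: the admin queue is connected over TCP (ICReq/ICResp exchange), the FABRIC CONNECT capsule returns CNTLID 0x0001, and fabric property GETs read VS and CAP before the enable state machine runs. The same discovery exchange can also be driven through the initiator-side interfaces with the kernel NVMe/TCP initiator; an optional cross-check, not part of this test run, assuming nvme-cli is installed on the host:

    # Kernel-initiator view of the same discovery service (optional cross-check)
    modprobe nvme-tcp                          # already loaded by nvmf/common.sh earlier in this log
    nvme discover -t tcp -a 10.0.0.3 -s 4420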
00:16:46.919 [2024-11-19 09:44:34.492400] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1519750) 00:16:46.919 [2024-11-19 09:44:34.492410] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.919 [2024-11-19 09:44:34.492440] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x157d740, cid 0, qid 0 00:16:46.919 [2024-11-19 09:44:34.492503] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:46.919 [2024-11-19 09:44:34.492510] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:46.919 [2024-11-19 09:44:34.492514] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:46.919 [2024-11-19 09:44:34.492519] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x157d740) on tqpair=0x1519750 00:16:46.919 [2024-11-19 09:44:34.492525] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:16:46.919 [2024-11-19 09:44:34.492533] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:16:46.919 [2024-11-19 09:44:34.492541] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:46.919 [2024-11-19 09:44:34.492546] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:46.919 [2024-11-19 09:44:34.492550] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1519750) 00:16:46.919 [2024-11-19 09:44:34.492558] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.919 [2024-11-19 09:44:34.492578] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x157d740, cid 0, qid 0 00:16:46.919 [2024-11-19 09:44:34.492628] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:46.919 [2024-11-19 09:44:34.492636] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:46.919 [2024-11-19 09:44:34.492639] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:46.919 [2024-11-19 09:44:34.492644] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x157d740) on tqpair=0x1519750 00:16:46.919 [2024-11-19 09:44:34.492650] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:16:46.919 [2024-11-19 09:44:34.492658] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:16:46.919 [2024-11-19 09:44:34.492666] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:46.919 [2024-11-19 09:44:34.492671] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:46.919 [2024-11-19 09:44:34.492675] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1519750) 00:16:46.919 [2024-11-19 09:44:34.492683] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.919 [2024-11-19 09:44:34.492701] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x157d740, cid 0, qid 0 00:16:46.919 [2024-11-19 09:44:34.492744] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:46.919 [2024-11-19 09:44:34.492751] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:46.919 [2024-11-19 09:44:34.492755] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:46.919 [2024-11-19 09:44:34.492759] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x157d740) on tqpair=0x1519750 00:16:46.919 [2024-11-19 09:44:34.492765] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:16:46.919 [2024-11-19 09:44:34.492775] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:46.919 [2024-11-19 09:44:34.492781] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:46.919 [2024-11-19 09:44:34.492785] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1519750) 00:16:46.919 [2024-11-19 09:44:34.492792] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.919 [2024-11-19 09:44:34.492809] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x157d740, cid 0, qid 0 00:16:46.919 [2024-11-19 09:44:34.492853] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:46.919 [2024-11-19 09:44:34.492860] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:46.919 [2024-11-19 09:44:34.492864] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:46.919 [2024-11-19 09:44:34.492868] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x157d740) on tqpair=0x1519750 00:16:46.919 [2024-11-19 09:44:34.492884] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:16:46.919 [2024-11-19 09:44:34.492889] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:16:46.919 [2024-11-19 09:44:34.492897] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:16:46.919 [2024-11-19 09:44:34.493009] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:16:46.919 [2024-11-19 09:44:34.493016] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:16:46.919 [2024-11-19 09:44:34.493026] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:46.919 [2024-11-19 09:44:34.493031] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:46.919 [2024-11-19 09:44:34.493035] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1519750) 00:16:46.919 [2024-11-19 09:44:34.493043] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.919 [2024-11-19 09:44:34.493062] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x157d740, cid 0, qid 0 00:16:46.919 [2024-11-19 09:44:34.493106] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:46.919 [2024-11-19 09:44:34.493113] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:46.919 [2024-11-19 09:44:34.493117] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: 
enter 00:16:46.919 [2024-11-19 09:44:34.493121] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x157d740) on tqpair=0x1519750 00:16:46.919 [2024-11-19 09:44:34.493126] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:16:46.919 [2024-11-19 09:44:34.493137] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:46.919 [2024-11-19 09:44:34.493142] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:46.919 [2024-11-19 09:44:34.493146] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1519750) 00:16:46.919 [2024-11-19 09:44:34.493153] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.919 [2024-11-19 09:44:34.493172] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x157d740, cid 0, qid 0 00:16:46.919 [2024-11-19 09:44:34.493227] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:46.919 [2024-11-19 09:44:34.493236] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:46.919 [2024-11-19 09:44:34.493240] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:46.919 [2024-11-19 09:44:34.493244] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x157d740) on tqpair=0x1519750 00:16:46.919 [2024-11-19 09:44:34.493249] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:16:46.920 [2024-11-19 09:44:34.493255] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:16:46.920 [2024-11-19 09:44:34.493264] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:16:46.920 [2024-11-19 09:44:34.493298] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:16:46.920 [2024-11-19 09:44:34.493309] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:46.920 [2024-11-19 09:44:34.493313] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1519750) 00:16:46.920 [2024-11-19 09:44:34.493322] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.920 [2024-11-19 09:44:34.493343] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x157d740, cid 0, qid 0 00:16:46.920 [2024-11-19 09:44:34.493444] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:46.920 [2024-11-19 09:44:34.493451] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:46.920 [2024-11-19 09:44:34.493455] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:46.920 [2024-11-19 09:44:34.493460] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1519750): datao=0, datal=4096, cccid=0 00:16:46.920 [2024-11-19 09:44:34.493465] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x157d740) on tqpair(0x1519750): expected_datao=0, payload_size=4096 00:16:46.920 [2024-11-19 09:44:34.493470] nvme_tcp.c: 732:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:16:46.920 [2024-11-19 09:44:34.493478] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:46.920 [2024-11-19 09:44:34.493483] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:46.920 [2024-11-19 09:44:34.493492] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:46.920 [2024-11-19 09:44:34.493498] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:46.920 [2024-11-19 09:44:34.493502] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:46.920 [2024-11-19 09:44:34.493506] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x157d740) on tqpair=0x1519750 00:16:46.920 [2024-11-19 09:44:34.493515] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:16:46.920 [2024-11-19 09:44:34.493521] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:16:46.920 [2024-11-19 09:44:34.493526] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:16:46.920 [2024-11-19 09:44:34.493531] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:16:46.920 [2024-11-19 09:44:34.493536] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:16:46.920 [2024-11-19 09:44:34.493542] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:16:46.920 [2024-11-19 09:44:34.493556] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:16:46.920 [2024-11-19 09:44:34.493565] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:46.920 [2024-11-19 09:44:34.493569] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:46.920 [2024-11-19 09:44:34.493573] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1519750) 00:16:46.920 [2024-11-19 09:44:34.493581] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:46.920 [2024-11-19 09:44:34.493601] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x157d740, cid 0, qid 0 00:16:46.920 [2024-11-19 09:44:34.493657] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:46.920 [2024-11-19 09:44:34.493664] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:46.920 [2024-11-19 09:44:34.493667] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:46.920 [2024-11-19 09:44:34.493671] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x157d740) on tqpair=0x1519750 00:16:46.920 [2024-11-19 09:44:34.493680] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:46.920 [2024-11-19 09:44:34.493684] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:46.920 [2024-11-19 09:44:34.493688] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1519750) 00:16:46.920 [2024-11-19 09:44:34.493695] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:46.920 
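nvme_ctrlr_identify_done records the limits negotiated for this discovery controller: the TCP transport itself allows up to 4294967295 bytes per transfer and 16 SG entries, but MDTS caps transfers at 131072 bytes, and fused COMPARE+WRITE is advertised. Given the 4096-byte minimum page size reported later in the identify dump, 131072 bytes corresponds to an MDTS field of 5; that field value is a spec-level inference, not something the tool prints:

    # Effective max transfer = min(transport limit, 2^MDTS * MPSMIN page size)
    echo $(( (1 << 5) * 4096 ))    # 131072 -> MDTS field of 5 with 4 KiB pages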
[2024-11-19 09:44:34.493702] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:46.920 [2024-11-19 09:44:34.493706] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:46.920 [2024-11-19 09:44:34.493710] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1519750) 00:16:46.920 [2024-11-19 09:44:34.493716] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:46.920 [2024-11-19 09:44:34.493726] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:46.920 [2024-11-19 09:44:34.493733] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:46.920 [2024-11-19 09:44:34.493739] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1519750) 00:16:46.920 [2024-11-19 09:44:34.493748] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:46.920 [2024-11-19 09:44:34.493758] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:46.920 [2024-11-19 09:44:34.493765] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:46.920 [2024-11-19 09:44:34.493771] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1519750) 00:16:46.920 [2024-11-19 09:44:34.493780] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:46.920 [2024-11-19 09:44:34.493790] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:16:46.920 [2024-11-19 09:44:34.493815] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:16:46.920 [2024-11-19 09:44:34.493828] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:46.920 [2024-11-19 09:44:34.493835] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1519750) 00:16:46.920 [2024-11-19 09:44:34.493845] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.920 [2024-11-19 09:44:34.493881] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x157d740, cid 0, qid 0 00:16:46.920 [2024-11-19 09:44:34.493893] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x157d8c0, cid 1, qid 0 00:16:46.920 [2024-11-19 09:44:34.493899] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x157da40, cid 2, qid 0 00:16:46.920 [2024-11-19 09:44:34.493904] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x157dbc0, cid 3, qid 0 00:16:46.920 [2024-11-19 09:44:34.493909] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x157dd40, cid 4, qid 0 00:16:46.920 [2024-11-19 09:44:34.493985] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:46.920 [2024-11-19 09:44:34.493992] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:46.920 [2024-11-19 09:44:34.493996] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:46.920 [2024-11-19 09:44:34.494001] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x157dd40) on tqpair=0x1519750 00:16:46.920 [2024-11-19 
09:44:34.494007] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:16:46.920 [2024-11-19 09:44:34.494013] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:16:46.920 [2024-11-19 09:44:34.494026] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:46.920 [2024-11-19 09:44:34.494031] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1519750) 00:16:46.920 [2024-11-19 09:44:34.494039] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.920 [2024-11-19 09:44:34.494058] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x157dd40, cid 4, qid 0 00:16:46.920 [2024-11-19 09:44:34.494121] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:46.920 [2024-11-19 09:44:34.494128] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:46.920 [2024-11-19 09:44:34.494132] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:46.920 [2024-11-19 09:44:34.494136] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1519750): datao=0, datal=4096, cccid=4 00:16:46.920 [2024-11-19 09:44:34.494141] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x157dd40) on tqpair(0x1519750): expected_datao=0, payload_size=4096 00:16:46.920 [2024-11-19 09:44:34.494146] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:46.920 [2024-11-19 09:44:34.494154] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:46.920 [2024-11-19 09:44:34.494158] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:46.920 [2024-11-19 09:44:34.494167] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:46.920 [2024-11-19 09:44:34.494173] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:46.920 [2024-11-19 09:44:34.494177] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:46.920 [2024-11-19 09:44:34.494181] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x157dd40) on tqpair=0x1519750 00:16:46.920 [2024-11-19 09:44:34.494196] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:16:46.920 [2024-11-19 09:44:34.494251] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:46.920 [2024-11-19 09:44:34.494259] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1519750) 00:16:46.920 [2024-11-19 09:44:34.494267] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.920 [2024-11-19 09:44:34.494276] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:46.920 [2024-11-19 09:44:34.494280] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:46.920 [2024-11-19 09:44:34.494284] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1519750) 00:16:46.920 [2024-11-19 09:44:34.494291] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:16:46.920 [2024-11-19 09:44:34.494319] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x157dd40, cid 4, qid 0 00:16:46.920 [2024-11-19 09:44:34.494326] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x157dec0, cid 5, qid 0 00:16:46.920 [2024-11-19 09:44:34.494445] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:46.920 [2024-11-19 09:44:34.494453] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:46.921 [2024-11-19 09:44:34.494457] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:46.921 [2024-11-19 09:44:34.494461] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1519750): datao=0, datal=1024, cccid=4 00:16:46.921 [2024-11-19 09:44:34.494466] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x157dd40) on tqpair(0x1519750): expected_datao=0, payload_size=1024 00:16:46.921 [2024-11-19 09:44:34.494470] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:46.921 [2024-11-19 09:44:34.494478] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:46.921 [2024-11-19 09:44:34.494482] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:46.921 [2024-11-19 09:44:34.494488] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:46.921 [2024-11-19 09:44:34.494494] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:46.921 [2024-11-19 09:44:34.494498] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:46.921 [2024-11-19 09:44:34.494503] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x157dec0) on tqpair=0x1519750 00:16:46.921 [2024-11-19 09:44:34.494520] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:46.921 [2024-11-19 09:44:34.494528] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:46.921 [2024-11-19 09:44:34.494532] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:46.921 [2024-11-19 09:44:34.494536] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x157dd40) on tqpair=0x1519750 00:16:46.921 [2024-11-19 09:44:34.494571] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:46.921 [2024-11-19 09:44:34.494580] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1519750) 00:16:46.921 [2024-11-19 09:44:34.494589] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.921 [2024-11-19 09:44:34.494620] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x157dd40, cid 4, qid 0 00:16:46.921 [2024-11-19 09:44:34.494695] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:46.921 [2024-11-19 09:44:34.494703] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:46.921 [2024-11-19 09:44:34.494707] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:46.921 [2024-11-19 09:44:34.494711] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1519750): datao=0, datal=3072, cccid=4 00:16:46.921 [2024-11-19 09:44:34.494716] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x157dd40) on tqpair(0x1519750): expected_datao=0, payload_size=3072 00:16:46.921 [2024-11-19 09:44:34.494720] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:46.921 [2024-11-19 09:44:34.494728] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 
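The GET LOG PAGE commands around this point (cdw10 values 0x00ff0070 and 0x02ff0070, plus 0x00010070 just below) are the discovery log page (LID 0x70) being fetched in the usual pattern: an initial 1024-byte read covering the header, a 3072-byte read of the entries, then an 8-byte re-read that matches the customary re-check of the generation counter; the sizes line up with the datal values in the c2h_data traces. Decoding those cdw10 values, with LID in bits 7:0 and the zero-based dword count in bits 31:16 per the NVMe spec:

    for cdw10 in 0x00ff0070 0x02ff0070 0x00010070; do
        lid=$(( cdw10 & 0xff ))                  # log page identifier
        bytes=$(( ((cdw10 >> 16) + 1) * 4 ))     # NUMDL is zero-based, counted in dwords
        printf 'cdw10=%s  LID=0x%02x  %d bytes\n' "$cdw10" "$lid" "$bytes"
    done
    # -> LID=0x70 (discovery log) with 1024, 3072 and 8 byte transfers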
00:16:46.921 [2024-11-19 09:44:34.494732] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:46.921 [2024-11-19 09:44:34.494741] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:46.921 [2024-11-19 09:44:34.494747] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:46.921 [2024-11-19 09:44:34.494751] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:46.921 [2024-11-19 09:44:34.494755] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x157dd40) on tqpair=0x1519750 00:16:46.921 [2024-11-19 09:44:34.494765] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:46.921 [2024-11-19 09:44:34.494770] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1519750) 00:16:46.921 [2024-11-19 09:44:34.494777] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.921 [2024-11-19 09:44:34.494801] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x157dd40, cid 4, qid 0 00:16:46.921 [2024-11-19 09:44:34.494862] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:46.921 [2024-11-19 09:44:34.494869] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:46.921 [2024-11-19 09:44:34.494872] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:46.921 [2024-11-19 09:44:34.494876] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1519750): datao=0, datal=8, cccid=4 00:16:46.921 [2024-11-19 09:44:34.494881] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x157dd40) on tqpair(0x1519750): expected_datao=0, payload_size=8 00:16:46.921 [2024-11-19 09:44:34.494886] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:46.921 [2024-11-19 09:44:34.494893] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:46.921 [2024-11-19 09:44:34.494904] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:46.921 [2024-11-19 09:44:34.494920] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:46.921 [2024-11-19 09:44:34.494928] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:46.921 [2024-11-19 09:44:34.494931] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:46.921 [2024-11-19 09:44:34.494935] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x157dd40) on tqpair=0x1519750 00:16:46.921 ===================================================== 00:16:46.921 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2014-08.org.nvmexpress.discovery 00:16:46.921 ===================================================== 00:16:46.921 Controller Capabilities/Features 00:16:46.921 ================================ 00:16:46.921 Vendor ID: 0000 00:16:46.921 Subsystem Vendor ID: 0000 00:16:46.921 Serial Number: .................... 00:16:46.921 Model Number: ........................................ 
00:16:46.921 Firmware Version: 25.01 00:16:46.921 Recommended Arb Burst: 0 00:16:46.921 IEEE OUI Identifier: 00 00 00 00:16:46.921 Multi-path I/O 00:16:46.921 May have multiple subsystem ports: No 00:16:46.921 May have multiple controllers: No 00:16:46.921 Associated with SR-IOV VF: No 00:16:46.921 Max Data Transfer Size: 131072 00:16:46.921 Max Number of Namespaces: 0 00:16:46.921 Max Number of I/O Queues: 1024 00:16:46.921 NVMe Specification Version (VS): 1.3 00:16:46.921 NVMe Specification Version (Identify): 1.3 00:16:46.921 Maximum Queue Entries: 128 00:16:46.921 Contiguous Queues Required: Yes 00:16:46.921 Arbitration Mechanisms Supported 00:16:46.921 Weighted Round Robin: Not Supported 00:16:46.921 Vendor Specific: Not Supported 00:16:46.921 Reset Timeout: 15000 ms 00:16:46.921 Doorbell Stride: 4 bytes 00:16:46.921 NVM Subsystem Reset: Not Supported 00:16:46.921 Command Sets Supported 00:16:46.921 NVM Command Set: Supported 00:16:46.921 Boot Partition: Not Supported 00:16:46.921 Memory Page Size Minimum: 4096 bytes 00:16:46.921 Memory Page Size Maximum: 4096 bytes 00:16:46.921 Persistent Memory Region: Not Supported 00:16:46.921 Optional Asynchronous Events Supported 00:16:46.921 Namespace Attribute Notices: Not Supported 00:16:46.921 Firmware Activation Notices: Not Supported 00:16:46.921 ANA Change Notices: Not Supported 00:16:46.921 PLE Aggregate Log Change Notices: Not Supported 00:16:46.921 LBA Status Info Alert Notices: Not Supported 00:16:46.921 EGE Aggregate Log Change Notices: Not Supported 00:16:46.921 Normal NVM Subsystem Shutdown event: Not Supported 00:16:46.921 Zone Descriptor Change Notices: Not Supported 00:16:46.921 Discovery Log Change Notices: Supported 00:16:46.921 Controller Attributes 00:16:46.921 128-bit Host Identifier: Not Supported 00:16:46.921 Non-Operational Permissive Mode: Not Supported 00:16:46.921 NVM Sets: Not Supported 00:16:46.921 Read Recovery Levels: Not Supported 00:16:46.921 Endurance Groups: Not Supported 00:16:46.921 Predictable Latency Mode: Not Supported 00:16:46.921 Traffic Based Keep ALive: Not Supported 00:16:46.921 Namespace Granularity: Not Supported 00:16:46.921 SQ Associations: Not Supported 00:16:46.921 UUID List: Not Supported 00:16:46.921 Multi-Domain Subsystem: Not Supported 00:16:46.921 Fixed Capacity Management: Not Supported 00:16:46.921 Variable Capacity Management: Not Supported 00:16:46.921 Delete Endurance Group: Not Supported 00:16:46.921 Delete NVM Set: Not Supported 00:16:46.921 Extended LBA Formats Supported: Not Supported 00:16:46.921 Flexible Data Placement Supported: Not Supported 00:16:46.921 00:16:46.921 Controller Memory Buffer Support 00:16:46.921 ================================ 00:16:46.921 Supported: No 00:16:46.921 00:16:46.921 Persistent Memory Region Support 00:16:46.921 ================================ 00:16:46.921 Supported: No 00:16:46.921 00:16:46.921 Admin Command Set Attributes 00:16:46.921 ============================ 00:16:46.921 Security Send/Receive: Not Supported 00:16:46.921 Format NVM: Not Supported 00:16:46.921 Firmware Activate/Download: Not Supported 00:16:46.921 Namespace Management: Not Supported 00:16:46.921 Device Self-Test: Not Supported 00:16:46.921 Directives: Not Supported 00:16:46.921 NVMe-MI: Not Supported 00:16:46.921 Virtualization Management: Not Supported 00:16:46.921 Doorbell Buffer Config: Not Supported 00:16:46.921 Get LBA Status Capability: Not Supported 00:16:46.921 Command & Feature Lockdown Capability: Not Supported 00:16:46.921 Abort Command Limit: 1 00:16:46.921 Async 
Event Request Limit: 4 00:16:46.921 Number of Firmware Slots: N/A 00:16:46.921 Firmware Slot 1 Read-Only: N/A 00:16:46.921 Firmware Activation Without Reset: N/A 00:16:46.921 Multiple Update Detection Support: N/A 00:16:46.921 Firmware Update Granularity: No Information Provided 00:16:46.921 Per-Namespace SMART Log: No 00:16:46.921 Asymmetric Namespace Access Log Page: Not Supported 00:16:46.921 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:16:46.921 Command Effects Log Page: Not Supported 00:16:46.921 Get Log Page Extended Data: Supported 00:16:46.921 Telemetry Log Pages: Not Supported 00:16:46.921 Persistent Event Log Pages: Not Supported 00:16:46.921 Supported Log Pages Log Page: May Support 00:16:46.922 Commands Supported & Effects Log Page: Not Supported 00:16:46.922 Feature Identifiers & Effects Log Page:May Support 00:16:46.922 NVMe-MI Commands & Effects Log Page: May Support 00:16:46.922 Data Area 4 for Telemetry Log: Not Supported 00:16:46.922 Error Log Page Entries Supported: 128 00:16:46.922 Keep Alive: Not Supported 00:16:46.922 00:16:46.922 NVM Command Set Attributes 00:16:46.922 ========================== 00:16:46.922 Submission Queue Entry Size 00:16:46.922 Max: 1 00:16:46.922 Min: 1 00:16:46.922 Completion Queue Entry Size 00:16:46.922 Max: 1 00:16:46.922 Min: 1 00:16:46.922 Number of Namespaces: 0 00:16:46.922 Compare Command: Not Supported 00:16:46.922 Write Uncorrectable Command: Not Supported 00:16:46.922 Dataset Management Command: Not Supported 00:16:46.922 Write Zeroes Command: Not Supported 00:16:46.922 Set Features Save Field: Not Supported 00:16:46.922 Reservations: Not Supported 00:16:46.922 Timestamp: Not Supported 00:16:46.922 Copy: Not Supported 00:16:46.922 Volatile Write Cache: Not Present 00:16:46.922 Atomic Write Unit (Normal): 1 00:16:46.922 Atomic Write Unit (PFail): 1 00:16:46.922 Atomic Compare & Write Unit: 1 00:16:46.922 Fused Compare & Write: Supported 00:16:46.922 Scatter-Gather List 00:16:46.922 SGL Command Set: Supported 00:16:46.922 SGL Keyed: Supported 00:16:46.922 SGL Bit Bucket Descriptor: Not Supported 00:16:46.922 SGL Metadata Pointer: Not Supported 00:16:46.922 Oversized SGL: Not Supported 00:16:46.922 SGL Metadata Address: Not Supported 00:16:46.922 SGL Offset: Supported 00:16:46.922 Transport SGL Data Block: Not Supported 00:16:46.922 Replay Protected Memory Block: Not Supported 00:16:46.922 00:16:46.922 Firmware Slot Information 00:16:46.922 ========================= 00:16:46.922 Active slot: 0 00:16:46.922 00:16:46.922 00:16:46.922 Error Log 00:16:46.922 ========= 00:16:46.922 00:16:46.922 Active Namespaces 00:16:46.922 ================= 00:16:46.922 Discovery Log Page 00:16:46.922 ================== 00:16:46.922 Generation Counter: 2 00:16:46.922 Number of Records: 2 00:16:46.922 Record Format: 0 00:16:46.922 00:16:46.922 Discovery Log Entry 0 00:16:46.922 ---------------------- 00:16:46.922 Transport Type: 3 (TCP) 00:16:46.922 Address Family: 1 (IPv4) 00:16:46.922 Subsystem Type: 3 (Current Discovery Subsystem) 00:16:46.922 Entry Flags: 00:16:46.922 Duplicate Returned Information: 1 00:16:46.922 Explicit Persistent Connection Support for Discovery: 1 00:16:46.922 Transport Requirements: 00:16:46.922 Secure Channel: Not Required 00:16:46.922 Port ID: 0 (0x0000) 00:16:46.922 Controller ID: 65535 (0xffff) 00:16:46.922 Admin Max SQ Size: 128 00:16:46.922 Transport Service Identifier: 4420 00:16:46.922 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:16:46.922 Transport Address: 10.0.0.3 00:16:46.922 
Discovery Log Entry 1 00:16:46.922 ---------------------- 00:16:46.922 Transport Type: 3 (TCP) 00:16:46.922 Address Family: 1 (IPv4) 00:16:46.922 Subsystem Type: 2 (NVM Subsystem) 00:16:46.922 Entry Flags: 00:16:46.922 Duplicate Returned Information: 0 00:16:46.922 Explicit Persistent Connection Support for Discovery: 0 00:16:46.922 Transport Requirements: 00:16:46.922 Secure Channel: Not Required 00:16:46.922 Port ID: 0 (0x0000) 00:16:46.922 Controller ID: 65535 (0xffff) 00:16:46.922 Admin Max SQ Size: 128 00:16:46.922 Transport Service Identifier: 4420 00:16:46.922 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:16:46.922 Transport Address: 10.0.0.3 [2024-11-19 09:44:34.495044] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:16:46.922 [2024-11-19 09:44:34.495058] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x157d740) on tqpair=0x1519750 00:16:46.922 [2024-11-19 09:44:34.495065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:46.922 [2024-11-19 09:44:34.495071] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x157d8c0) on tqpair=0x1519750 00:16:46.922 [2024-11-19 09:44:34.495076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:46.922 [2024-11-19 09:44:34.495081] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x157da40) on tqpair=0x1519750 00:16:46.922 [2024-11-19 09:44:34.495086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:46.922 [2024-11-19 09:44:34.495091] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x157dbc0) on tqpair=0x1519750 00:16:46.922 [2024-11-19 09:44:34.495096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:46.922 [2024-11-19 09:44:34.495117] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:46.922 [2024-11-19 09:44:34.495122] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:46.922 [2024-11-19 09:44:34.495126] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1519750) 00:16:46.922 [2024-11-19 09:44:34.495135] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.922 [2024-11-19 09:44:34.495158] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x157dbc0, cid 3, qid 0 00:16:46.922 [2024-11-19 09:44:34.495223] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:46.922 [2024-11-19 09:44:34.495232] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:46.922 [2024-11-19 09:44:34.495236] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:46.922 [2024-11-19 09:44:34.495241] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x157dbc0) on tqpair=0x1519750 00:16:46.922 [2024-11-19 09:44:34.495249] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:46.922 [2024-11-19 09:44:34.495254] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:46.922 [2024-11-19 09:44:34.495258] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1519750) 00:16:46.922 [2024-11-19 
09:44:34.495266] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.922 [2024-11-19 09:44:34.495290] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x157dbc0, cid 3, qid 0 00:16:46.922 [2024-11-19 09:44:34.495360] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:46.922 [2024-11-19 09:44:34.495367] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:46.922 [2024-11-19 09:44:34.495370] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:46.922 [2024-11-19 09:44:34.495375] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x157dbc0) on tqpair=0x1519750 00:16:46.922 [2024-11-19 09:44:34.495380] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:16:46.922 [2024-11-19 09:44:34.495385] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:16:46.922 [2024-11-19 09:44:34.495396] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:46.922 [2024-11-19 09:44:34.495400] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:46.922 [2024-11-19 09:44:34.495404] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1519750) 00:16:46.922 [2024-11-19 09:44:34.495412] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.922 [2024-11-19 09:44:34.495429] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x157dbc0, cid 3, qid 0 00:16:46.922 [2024-11-19 09:44:34.495483] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:46.922 [2024-11-19 09:44:34.495489] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:46.922 [2024-11-19 09:44:34.495493] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:46.922 [2024-11-19 09:44:34.495497] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x157dbc0) on tqpair=0x1519750 00:16:46.922 [2024-11-19 09:44:34.495508] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:46.922 [2024-11-19 09:44:34.495513] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:46.922 [2024-11-19 09:44:34.495517] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1519750) 00:16:46.922 [2024-11-19 09:44:34.495525] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.922 [2024-11-19 09:44:34.495542] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x157dbc0, cid 3, qid 0 00:16:46.922 [2024-11-19 09:44:34.495583] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:46.922 [2024-11-19 09:44:34.495590] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:46.922 [2024-11-19 09:44:34.495594] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:46.922 [2024-11-19 09:44:34.495598] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x157dbc0) on tqpair=0x1519750 00:16:46.922 [2024-11-19 09:44:34.495609] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:46.922 [2024-11-19 09:44:34.495613] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:46.922 [2024-11-19 09:44:34.495617] 
nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1519750) 00:16:46.922 [2024-11-19 09:44:34.495625] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.922 [2024-11-19 09:44:34.495646] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x157dbc0, cid 3, qid 0 00:16:46.923 [2024-11-19 09:44:34.495696] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:46.923 [2024-11-19 09:44:34.495708] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:46.923 [2024-11-19 09:44:34.495714] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:46.923 [2024-11-19 09:44:34.495721] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x157dbc0) on tqpair=0x1519750 00:16:46.923 [2024-11-19 09:44:34.495738] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:46.923 [2024-11-19 09:44:34.495746] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:46.923 [2024-11-19 09:44:34.495753] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1519750) 00:16:46.923 [2024-11-19 09:44:34.495764] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.923 [2024-11-19 09:44:34.495796] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x157dbc0, cid 3, qid 0 00:16:46.923 [2024-11-19 09:44:34.495846] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:46.923 [2024-11-19 09:44:34.495858] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:46.923 [2024-11-19 09:44:34.495865] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:46.923 [2024-11-19 09:44:34.495872] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x157dbc0) on tqpair=0x1519750 00:16:46.923 [2024-11-19 09:44:34.495889] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:46.923 [2024-11-19 09:44:34.495898] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:46.923 [2024-11-19 09:44:34.495904] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1519750) 00:16:46.923 [2024-11-19 09:44:34.495919] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.923 [2024-11-19 09:44:34.495950] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x157dbc0, cid 3, qid 0 00:16:46.923 [2024-11-19 09:44:34.495994] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:46.923 [2024-11-19 09:44:34.496001] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:46.923 [2024-11-19 09:44:34.496005] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:46.923 [2024-11-19 09:44:34.496009] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x157dbc0) on tqpair=0x1519750 00:16:46.923 [2024-11-19 09:44:34.496020] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:46.923 [2024-11-19 09:44:34.496026] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:46.923 [2024-11-19 09:44:34.496030] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1519750) 00:16:46.923 [2024-11-19 09:44:34.496037] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.923 [2024-11-19 09:44:34.496056] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x157dbc0, cid 3, qid 0 00:16:46.923 [2024-11-19 09:44:34.496104] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:46.923 [2024-11-19 09:44:34.496111] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:46.923 [2024-11-19 09:44:34.496114] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:46.923 [2024-11-19 09:44:34.496119] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x157dbc0) on tqpair=0x1519750 00:16:46.923 [2024-11-19 09:44:34.496129] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:46.923 [2024-11-19 09:44:34.496134] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:46.923 [2024-11-19 09:44:34.496138] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1519750) 00:16:46.923 [2024-11-19 09:44:34.496145] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.923 [2024-11-19 09:44:34.496163] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x157dbc0, cid 3, qid 0 00:16:46.923 [2024-11-19 09:44:34.500222] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:46.923 [2024-11-19 09:44:34.500249] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:46.923 [2024-11-19 09:44:34.500254] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:46.923 [2024-11-19 09:44:34.500260] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x157dbc0) on tqpair=0x1519750 00:16:46.923 [2024-11-19 09:44:34.500277] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:46.923 [2024-11-19 09:44:34.500283] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:46.923 [2024-11-19 09:44:34.500287] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1519750) 00:16:46.923 [2024-11-19 09:44:34.500297] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.923 [2024-11-19 09:44:34.500326] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x157dbc0, cid 3, qid 0 00:16:46.923 [2024-11-19 09:44:34.500374] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:46.923 [2024-11-19 09:44:34.500381] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:46.923 [2024-11-19 09:44:34.500385] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:46.923 [2024-11-19 09:44:34.500390] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x157dbc0) on tqpair=0x1519750 00:16:46.923 [2024-11-19 09:44:34.500399] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 5 milliseconds 00:16:46.923 00:16:46.923 09:44:34 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:16:47.185 [2024-11-19 09:44:34.545031] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 
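The discovery log above advertises two records: the discovery subsystem itself and the NVM subsystem nqn.2016-06.io.spdk:cnode1 at 10.0.0.3:4420 over TCP, and the harness now runs spdk_nvme_identify against that second record. A minimal host-side sketch of the same connect-and-identify flow, using SPDK's public NVMe API, is shown below; the transport-ID string is copied from the command traced in the log, but the program itself is only illustrative and is not the code this test executes.

```c
/* Illustrative sketch only: connect to the subsystem advertised in the
 * discovery log above and print a few identify-controller fields.
 * Assumes an SPDK development environment; not the test's actual code. */
#include "spdk/stdinc.h"
#include "spdk/env.h"
#include "spdk/nvme.h"

int main(void)
{
	struct spdk_env_opts env_opts;
	struct spdk_nvme_transport_id trid = {0};
	struct spdk_nvme_ctrlr *ctrlr;
	const struct spdk_nvme_ctrlr_data *cdata;

	/* Newer SPDK releases expect opts_size to be set before init. */
	env_opts.opts_size = sizeof(env_opts);
	spdk_env_opts_init(&env_opts);
	env_opts.name = "identify_sketch";
	if (spdk_env_init(&env_opts) < 0) {
		return 1;
	}

	/* Same transport ID string the harness passes to spdk_nvme_identify -r. */
	if (spdk_nvme_transport_id_parse(&trid,
			"trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 "
			"subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
		return 1;
	}

	ctrlr = spdk_nvme_connect(&trid, NULL, 0);
	if (ctrlr == NULL) {
		fprintf(stderr, "connect to %s failed\n", trid.subnqn);
		return 1;
	}

	cdata = spdk_nvme_ctrlr_get_data(ctrlr);
	printf("SN: %.20s  MN: %.40s  max transfer: %u bytes\n",
	       (const char *)cdata->sn, (const char *)cdata->mn,
	       spdk_nvme_ctrlr_get_max_xfer_size(ctrlr));

	spdk_nvme_detach(ctrlr);
	return 0;
}
```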
00:16:47.185 [2024-11-19 09:44:34.545087] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74287 ] 00:16:47.185 [2024-11-19 09:44:34.704818] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:16:47.185 [2024-11-19 09:44:34.704898] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:16:47.185 [2024-11-19 09:44:34.704905] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:16:47.185 [2024-11-19 09:44:34.704918] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:16:47.185 [2024-11-19 09:44:34.704929] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:16:47.185 [2024-11-19 09:44:34.705237] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:16:47.185 [2024-11-19 09:44:34.705323] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xf35750 0 00:16:47.185 [2024-11-19 09:44:34.720282] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:16:47.185 [2024-11-19 09:44:34.720309] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:16:47.185 [2024-11-19 09:44:34.720333] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:16:47.185 [2024-11-19 09:44:34.720337] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:16:47.185 [2024-11-19 09:44:34.720372] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:47.185 [2024-11-19 09:44:34.720379] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:47.185 [2024-11-19 09:44:34.720384] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf35750) 00:16:47.185 [2024-11-19 09:44:34.720401] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:16:47.185 [2024-11-19 09:44:34.720435] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf99740, cid 0, qid 0 00:16:47.185 [2024-11-19 09:44:34.728259] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:47.186 [2024-11-19 09:44:34.728284] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:47.186 [2024-11-19 09:44:34.728290] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:47.186 [2024-11-19 09:44:34.728296] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf99740) on tqpair=0xf35750 00:16:47.186 [2024-11-19 09:44:34.728313] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:16:47.186 [2024-11-19 09:44:34.728323] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:16:47.186 [2024-11-19 09:44:34.728330] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:16:47.186 [2024-11-19 09:44:34.728349] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:47.186 [2024-11-19 09:44:34.728355] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:47.186 [2024-11-19 09:44:34.728359] 
nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf35750) 00:16:47.186 [2024-11-19 09:44:34.728373] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.186 [2024-11-19 09:44:34.728403] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf99740, cid 0, qid 0 00:16:47.186 [2024-11-19 09:44:34.728457] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:47.186 [2024-11-19 09:44:34.728464] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:47.186 [2024-11-19 09:44:34.728468] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:47.186 [2024-11-19 09:44:34.728473] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf99740) on tqpair=0xf35750 00:16:47.186 [2024-11-19 09:44:34.728479] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:16:47.186 [2024-11-19 09:44:34.728487] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:16:47.186 [2024-11-19 09:44:34.728495] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:47.186 [2024-11-19 09:44:34.728500] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:47.186 [2024-11-19 09:44:34.728504] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf35750) 00:16:47.186 [2024-11-19 09:44:34.728511] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.186 [2024-11-19 09:44:34.728530] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf99740, cid 0, qid 0 00:16:47.186 [2024-11-19 09:44:34.728575] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:47.186 [2024-11-19 09:44:34.728581] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:47.186 [2024-11-19 09:44:34.728585] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:47.186 [2024-11-19 09:44:34.728589] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf99740) on tqpair=0xf35750 00:16:47.186 [2024-11-19 09:44:34.728596] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:16:47.186 [2024-11-19 09:44:34.728605] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:16:47.186 [2024-11-19 09:44:34.728613] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:47.186 [2024-11-19 09:44:34.728617] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:47.186 [2024-11-19 09:44:34.728621] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf35750) 00:16:47.186 [2024-11-19 09:44:34.728628] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.186 [2024-11-19 09:44:34.728646] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf99740, cid 0, qid 0 00:16:47.186 [2024-11-19 09:44:34.728696] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:47.186 [2024-11-19 09:44:34.728703] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:47.186 [2024-11-19 
09:44:34.728707] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:47.186 [2024-11-19 09:44:34.728711] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf99740) on tqpair=0xf35750 00:16:47.186 [2024-11-19 09:44:34.728717] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:16:47.186 [2024-11-19 09:44:34.728728] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:47.186 [2024-11-19 09:44:34.728732] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:47.186 [2024-11-19 09:44:34.728736] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf35750) 00:16:47.186 [2024-11-19 09:44:34.728744] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.186 [2024-11-19 09:44:34.728760] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf99740, cid 0, qid 0 00:16:47.186 [2024-11-19 09:44:34.728808] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:47.186 [2024-11-19 09:44:34.728815] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:47.186 [2024-11-19 09:44:34.728818] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:47.186 [2024-11-19 09:44:34.728823] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf99740) on tqpair=0xf35750 00:16:47.186 [2024-11-19 09:44:34.728828] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:16:47.186 [2024-11-19 09:44:34.728834] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:16:47.186 [2024-11-19 09:44:34.728842] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:16:47.186 [2024-11-19 09:44:34.728954] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:16:47.186 [2024-11-19 09:44:34.728960] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:16:47.186 [2024-11-19 09:44:34.728970] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:47.186 [2024-11-19 09:44:34.728974] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:47.186 [2024-11-19 09:44:34.728978] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf35750) 00:16:47.186 [2024-11-19 09:44:34.728986] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.186 [2024-11-19 09:44:34.729005] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf99740, cid 0, qid 0 00:16:47.186 [2024-11-19 09:44:34.729056] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:47.186 [2024-11-19 09:44:34.729063] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:47.186 [2024-11-19 09:44:34.729067] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:47.186 [2024-11-19 09:44:34.729071] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf99740) on tqpair=0xf35750 00:16:47.186 
[2024-11-19 09:44:34.729077] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:16:47.186 [2024-11-19 09:44:34.729087] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:47.186 [2024-11-19 09:44:34.729092] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:47.186 [2024-11-19 09:44:34.729096] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf35750) 00:16:47.186 [2024-11-19 09:44:34.729103] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.186 [2024-11-19 09:44:34.729119] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf99740, cid 0, qid 0 00:16:47.186 [2024-11-19 09:44:34.729172] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:47.186 [2024-11-19 09:44:34.729179] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:47.186 [2024-11-19 09:44:34.729183] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:47.186 [2024-11-19 09:44:34.729187] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf99740) on tqpair=0xf35750 00:16:47.186 [2024-11-19 09:44:34.729192] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:16:47.186 [2024-11-19 09:44:34.729198] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:16:47.186 [2024-11-19 09:44:34.729219] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:16:47.186 [2024-11-19 09:44:34.729238] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:16:47.186 [2024-11-19 09:44:34.729249] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:47.186 [2024-11-19 09:44:34.729254] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf35750) 00:16:47.186 [2024-11-19 09:44:34.729262] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.186 [2024-11-19 09:44:34.729283] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf99740, cid 0, qid 0 00:16:47.186 [2024-11-19 09:44:34.729385] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:47.186 [2024-11-19 09:44:34.729392] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:47.186 [2024-11-19 09:44:34.729397] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:47.186 [2024-11-19 09:44:34.729401] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xf35750): datao=0, datal=4096, cccid=0 00:16:47.186 [2024-11-19 09:44:34.729406] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf99740) on tqpair(0xf35750): expected_datao=0, payload_size=4096 00:16:47.186 [2024-11-19 09:44:34.729411] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:47.186 [2024-11-19 09:44:34.729421] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:47.186 [2024-11-19 09:44:34.729426] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: 
*DEBUG*: enter 00:16:47.186 [2024-11-19 09:44:34.729435] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:47.186 [2024-11-19 09:44:34.729441] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:47.186 [2024-11-19 09:44:34.729444] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:47.186 [2024-11-19 09:44:34.729449] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf99740) on tqpair=0xf35750 00:16:47.186 [2024-11-19 09:44:34.729458] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:16:47.186 [2024-11-19 09:44:34.729464] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:16:47.186 [2024-11-19 09:44:34.729469] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:16:47.186 [2024-11-19 09:44:34.729473] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:16:47.186 [2024-11-19 09:44:34.729479] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:16:47.186 [2024-11-19 09:44:34.729484] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:16:47.187 [2024-11-19 09:44:34.729498] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:16:47.187 [2024-11-19 09:44:34.729507] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:47.187 [2024-11-19 09:44:34.729511] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:47.187 [2024-11-19 09:44:34.729515] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf35750) 00:16:47.187 [2024-11-19 09:44:34.729524] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:47.187 [2024-11-19 09:44:34.729543] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf99740, cid 0, qid 0 00:16:47.187 [2024-11-19 09:44:34.729591] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:47.187 [2024-11-19 09:44:34.729598] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:47.187 [2024-11-19 09:44:34.729602] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:47.187 [2024-11-19 09:44:34.729606] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf99740) on tqpair=0xf35750 00:16:47.187 [2024-11-19 09:44:34.729614] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:47.187 [2024-11-19 09:44:34.729618] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:47.187 [2024-11-19 09:44:34.729622] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf35750) 00:16:47.187 [2024-11-19 09:44:34.729629] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:47.187 [2024-11-19 09:44:34.729636] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:47.187 [2024-11-19 09:44:34.729640] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:47.187 [2024-11-19 09:44:34.729644] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: capsule_cmd cid=1 on tqpair(0xf35750) 00:16:47.187 [2024-11-19 09:44:34.729650] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:47.187 [2024-11-19 09:44:34.729657] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:47.187 [2024-11-19 09:44:34.729661] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:47.187 [2024-11-19 09:44:34.729665] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xf35750) 00:16:47.187 [2024-11-19 09:44:34.729671] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:47.187 [2024-11-19 09:44:34.729678] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:47.187 [2024-11-19 09:44:34.729682] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:47.187 [2024-11-19 09:44:34.729685] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf35750) 00:16:47.187 [2024-11-19 09:44:34.729691] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:47.187 [2024-11-19 09:44:34.729697] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:16:47.187 [2024-11-19 09:44:34.729710] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:16:47.187 [2024-11-19 09:44:34.729718] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:47.187 [2024-11-19 09:44:34.729722] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xf35750) 00:16:47.187 [2024-11-19 09:44:34.729729] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.187 [2024-11-19 09:44:34.729749] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf99740, cid 0, qid 0 00:16:47.187 [2024-11-19 09:44:34.729756] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf998c0, cid 1, qid 0 00:16:47.187 [2024-11-19 09:44:34.729761] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf99a40, cid 2, qid 0 00:16:47.187 [2024-11-19 09:44:34.729766] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf99bc0, cid 3, qid 0 00:16:47.187 [2024-11-19 09:44:34.729771] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf99d40, cid 4, qid 0 00:16:47.187 [2024-11-19 09:44:34.729860] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:47.187 [2024-11-19 09:44:34.729867] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:47.187 [2024-11-19 09:44:34.729871] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:47.187 [2024-11-19 09:44:34.729875] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf99d40) on tqpair=0xf35750 00:16:47.187 [2024-11-19 09:44:34.729880] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:16:47.187 [2024-11-19 09:44:34.729886] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific 
(timeout 30000 ms) 00:16:47.187 [2024-11-19 09:44:34.729895] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:16:47.187 [2024-11-19 09:44:34.729906] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:16:47.187 [2024-11-19 09:44:34.729914] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:47.187 [2024-11-19 09:44:34.729919] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:47.187 [2024-11-19 09:44:34.729923] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xf35750) 00:16:47.187 [2024-11-19 09:44:34.729930] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:47.187 [2024-11-19 09:44:34.729948] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf99d40, cid 4, qid 0 00:16:47.187 [2024-11-19 09:44:34.729999] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:47.187 [2024-11-19 09:44:34.730006] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:47.187 [2024-11-19 09:44:34.730009] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:47.187 [2024-11-19 09:44:34.730014] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf99d40) on tqpair=0xf35750 00:16:47.187 [2024-11-19 09:44:34.730081] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:16:47.187 [2024-11-19 09:44:34.730093] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:16:47.187 [2024-11-19 09:44:34.730102] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:47.187 [2024-11-19 09:44:34.730106] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xf35750) 00:16:47.187 [2024-11-19 09:44:34.730114] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.187 [2024-11-19 09:44:34.730132] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf99d40, cid 4, qid 0 00:16:47.187 [2024-11-19 09:44:34.730197] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:47.187 [2024-11-19 09:44:34.730204] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:47.187 [2024-11-19 09:44:34.730221] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:47.187 [2024-11-19 09:44:34.730226] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xf35750): datao=0, datal=4096, cccid=4 00:16:47.187 [2024-11-19 09:44:34.730231] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf99d40) on tqpair(0xf35750): expected_datao=0, payload_size=4096 00:16:47.187 [2024-11-19 09:44:34.730236] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:47.187 [2024-11-19 09:44:34.730244] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:47.187 [2024-11-19 09:44:34.730248] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:47.187 [2024-11-19 09:44:34.730257] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:47.187 [2024-11-19 
09:44:34.730263] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:47.187 [2024-11-19 09:44:34.730267] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:47.187 [2024-11-19 09:44:34.730271] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf99d40) on tqpair=0xf35750 00:16:47.187 [2024-11-19 09:44:34.730288] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:16:47.187 [2024-11-19 09:44:34.730300] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:16:47.187 [2024-11-19 09:44:34.730310] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:16:47.187 [2024-11-19 09:44:34.730319] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:47.187 [2024-11-19 09:44:34.730323] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xf35750) 00:16:47.187 [2024-11-19 09:44:34.730331] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.187 [2024-11-19 09:44:34.730352] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf99d40, cid 4, qid 0 00:16:47.188 [2024-11-19 09:44:34.730498] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:47.188 [2024-11-19 09:44:34.730505] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:47.188 [2024-11-19 09:44:34.730508] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:47.188 [2024-11-19 09:44:34.730512] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xf35750): datao=0, datal=4096, cccid=4 00:16:47.188 [2024-11-19 09:44:34.730517] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf99d40) on tqpair(0xf35750): expected_datao=0, payload_size=4096 00:16:47.188 [2024-11-19 09:44:34.730522] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:47.188 [2024-11-19 09:44:34.730529] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:47.188 [2024-11-19 09:44:34.730533] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:47.188 [2024-11-19 09:44:34.730542] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:47.188 [2024-11-19 09:44:34.730548] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:47.188 [2024-11-19 09:44:34.730552] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:47.188 [2024-11-19 09:44:34.730556] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf99d40) on tqpair=0xf35750 00:16:47.188 [2024-11-19 09:44:34.730575] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:16:47.188 [2024-11-19 09:44:34.730586] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:16:47.188 [2024-11-19 09:44:34.730595] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:47.188 [2024-11-19 09:44:34.730599] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xf35750) 00:16:47.188 [2024-11-19 09:44:34.730607] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.188 [2024-11-19 09:44:34.730626] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf99d40, cid 4, qid 0 00:16:47.188 [2024-11-19 09:44:34.730684] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:47.188 [2024-11-19 09:44:34.730691] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:47.188 [2024-11-19 09:44:34.730694] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:47.188 [2024-11-19 09:44:34.730698] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xf35750): datao=0, datal=4096, cccid=4 00:16:47.188 [2024-11-19 09:44:34.730703] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf99d40) on tqpair(0xf35750): expected_datao=0, payload_size=4096 00:16:47.188 [2024-11-19 09:44:34.730708] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:47.188 [2024-11-19 09:44:34.730715] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:47.188 [2024-11-19 09:44:34.730719] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:47.188 [2024-11-19 09:44:34.730728] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:47.188 [2024-11-19 09:44:34.730734] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:47.188 [2024-11-19 09:44:34.730738] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:47.188 [2024-11-19 09:44:34.730742] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf99d40) on tqpair=0xf35750 00:16:47.188 [2024-11-19 09:44:34.730751] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:16:47.188 [2024-11-19 09:44:34.730760] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:16:47.188 [2024-11-19 09:44:34.730772] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:16:47.188 [2024-11-19 09:44:34.730779] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:16:47.188 [2024-11-19 09:44:34.730784] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:16:47.188 [2024-11-19 09:44:34.730790] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:16:47.188 [2024-11-19 09:44:34.730796] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:16:47.188 [2024-11-19 09:44:34.730801] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:16:47.188 [2024-11-19 09:44:34.730807] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:16:47.188 [2024-11-19 09:44:34.730826] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:47.188 [2024-11-19 09:44:34.730830] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on 
tqpair(0xf35750) 00:16:47.188 [2024-11-19 09:44:34.730838] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.188 [2024-11-19 09:44:34.730847] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:47.188 [2024-11-19 09:44:34.730851] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:47.188 [2024-11-19 09:44:34.730855] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xf35750) 00:16:47.188 [2024-11-19 09:44:34.730861] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:16:47.188 [2024-11-19 09:44:34.730886] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf99d40, cid 4, qid 0 00:16:47.188 [2024-11-19 09:44:34.730893] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf99ec0, cid 5, qid 0 00:16:47.188 [2024-11-19 09:44:34.730964] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:47.188 [2024-11-19 09:44:34.730971] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:47.188 [2024-11-19 09:44:34.730975] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:47.188 [2024-11-19 09:44:34.730979] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf99d40) on tqpair=0xf35750 00:16:47.188 [2024-11-19 09:44:34.730986] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:47.188 [2024-11-19 09:44:34.730992] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:47.188 [2024-11-19 09:44:34.730996] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:47.188 [2024-11-19 09:44:34.731000] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf99ec0) on tqpair=0xf35750 00:16:47.188 [2024-11-19 09:44:34.731010] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:47.188 [2024-11-19 09:44:34.731015] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xf35750) 00:16:47.188 [2024-11-19 09:44:34.731022] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.188 [2024-11-19 09:44:34.731039] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf99ec0, cid 5, qid 0 00:16:47.188 [2024-11-19 09:44:34.731084] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:47.188 [2024-11-19 09:44:34.731091] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:47.188 [2024-11-19 09:44:34.731095] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:47.188 [2024-11-19 09:44:34.731112] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf99ec0) on tqpair=0xf35750 00:16:47.188 [2024-11-19 09:44:34.731125] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:47.188 [2024-11-19 09:44:34.731130] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xf35750) 00:16:47.188 [2024-11-19 09:44:34.731137] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.189 [2024-11-19 09:44:34.731159] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf99ec0, cid 5, qid 0 00:16:47.189 [2024-11-19 09:44:34.731261] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:47.189 [2024-11-19 09:44:34.731272] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:47.189 [2024-11-19 09:44:34.731276] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:47.189 [2024-11-19 09:44:34.731281] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf99ec0) on tqpair=0xf35750 00:16:47.189 [2024-11-19 09:44:34.731292] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:47.189 [2024-11-19 09:44:34.731297] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xf35750) 00:16:47.189 [2024-11-19 09:44:34.731305] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.189 [2024-11-19 09:44:34.731327] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf99ec0, cid 5, qid 0 00:16:47.189 [2024-11-19 09:44:34.731378] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:47.189 [2024-11-19 09:44:34.731385] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:47.189 [2024-11-19 09:44:34.731388] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:47.189 [2024-11-19 09:44:34.731393] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf99ec0) on tqpair=0xf35750 00:16:47.189 [2024-11-19 09:44:34.731414] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:47.189 [2024-11-19 09:44:34.731419] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xf35750) 00:16:47.189 [2024-11-19 09:44:34.731427] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.189 [2024-11-19 09:44:34.731435] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:47.189 [2024-11-19 09:44:34.731441] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xf35750) 00:16:47.189 [2024-11-19 09:44:34.731447] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.189 [2024-11-19 09:44:34.731455] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:47.189 [2024-11-19 09:44:34.731459] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0xf35750) 00:16:47.189 [2024-11-19 09:44:34.731466] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.189 [2024-11-19 09:44:34.731474] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:47.189 [2024-11-19 09:44:34.731478] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xf35750) 00:16:47.189 [2024-11-19 09:44:34.731485] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.189 [2024-11-19 09:44:34.731504] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf99ec0, cid 5, qid 0 00:16:47.189 [2024-11-19 09:44:34.731512] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf99d40, cid 4, qid 0 00:16:47.189 
[2024-11-19 09:44:34.731517] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf9a040, cid 6, qid 0 00:16:47.189 [2024-11-19 09:44:34.731522] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf9a1c0, cid 7, qid 0 00:16:47.189 [2024-11-19 09:44:34.731661] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:47.189 [2024-11-19 09:44:34.731668] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:47.189 [2024-11-19 09:44:34.731672] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:47.189 [2024-11-19 09:44:34.731676] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xf35750): datao=0, datal=8192, cccid=5 00:16:47.189 [2024-11-19 09:44:34.731681] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf99ec0) on tqpair(0xf35750): expected_datao=0, payload_size=8192 00:16:47.189 [2024-11-19 09:44:34.731686] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:47.189 [2024-11-19 09:44:34.731703] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:47.189 [2024-11-19 09:44:34.731708] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:47.189 [2024-11-19 09:44:34.731714] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:47.189 [2024-11-19 09:44:34.731720] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:47.189 [2024-11-19 09:44:34.731723] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:47.189 [2024-11-19 09:44:34.731727] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xf35750): datao=0, datal=512, cccid=4 00:16:47.189 [2024-11-19 09:44:34.731732] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf99d40) on tqpair(0xf35750): expected_datao=0, payload_size=512 00:16:47.189 [2024-11-19 09:44:34.731737] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:47.189 [2024-11-19 09:44:34.731743] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:47.189 [2024-11-19 09:44:34.731747] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:47.189 [2024-11-19 09:44:34.731753] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:47.189 [2024-11-19 09:44:34.731759] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:47.189 [2024-11-19 09:44:34.731763] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:47.189 [2024-11-19 09:44:34.731767] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xf35750): datao=0, datal=512, cccid=6 00:16:47.189 [2024-11-19 09:44:34.731771] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf9a040) on tqpair(0xf35750): expected_datao=0, payload_size=512 00:16:47.189 [2024-11-19 09:44:34.731776] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:47.189 [2024-11-19 09:44:34.731782] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:47.189 [2024-11-19 09:44:34.731786] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:47.189 [2024-11-19 09:44:34.731792] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:47.189 [2024-11-19 09:44:34.731798] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:47.189 [2024-11-19 09:44:34.731801] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:47.189 [2024-11-19 09:44:34.731805] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: 
c2h_data info on tqpair(0xf35750): datao=0, datal=4096, cccid=7 00:16:47.189 [2024-11-19 09:44:34.731810] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf9a1c0) on tqpair(0xf35750): expected_datao=0, payload_size=4096 00:16:47.189 [2024-11-19 09:44:34.731814] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:47.189 [2024-11-19 09:44:34.731821] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:47.189 [2024-11-19 09:44:34.731825] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:47.189 [2024-11-19 09:44:34.731834] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:47.189 [2024-11-19 09:44:34.731840] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:47.189 [2024-11-19 09:44:34.731843] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:47.189 [2024-11-19 09:44:34.731848] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf99ec0) on tqpair=0xf35750 00:16:47.189 [2024-11-19 09:44:34.731864] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:47.189 [2024-11-19 09:44:34.731871] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:47.189 [2024-11-19 09:44:34.731875] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:47.189 [2024-11-19 09:44:34.731879] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf99d40) on tqpair=0xf35750 00:16:47.189 [2024-11-19 09:44:34.731892] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:47.189 [2024-11-19 09:44:34.731899] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:47.189 [2024-11-19 09:44:34.731902] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:47.190 [2024-11-19 09:44:34.731906] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf9a040) on tqpair=0xf35750 00:16:47.190 [2024-11-19 09:44:34.731914] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:47.190 [2024-11-19 09:44:34.731920] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:47.190 [2024-11-19 09:44:34.731924] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:47.190 [2024-11-19 09:44:34.731928] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf9a1c0) on tqpair=0xf35750 00:16:47.190 ===================================================== 00:16:47.190 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:16:47.190 ===================================================== 00:16:47.190 Controller Capabilities/Features 00:16:47.190 ================================ 00:16:47.190 Vendor ID: 8086 00:16:47.190 Subsystem Vendor ID: 8086 00:16:47.190 Serial Number: SPDK00000000000001 00:16:47.190 Model Number: SPDK bdev Controller 00:16:47.190 Firmware Version: 25.01 00:16:47.190 Recommended Arb Burst: 6 00:16:47.190 IEEE OUI Identifier: e4 d2 5c 00:16:47.190 Multi-path I/O 00:16:47.190 May have multiple subsystem ports: Yes 00:16:47.190 May have multiple controllers: Yes 00:16:47.190 Associated with SR-IOV VF: No 00:16:47.190 Max Data Transfer Size: 131072 00:16:47.190 Max Number of Namespaces: 32 00:16:47.190 Max Number of I/O Queues: 127 00:16:47.190 NVMe Specification Version (VS): 1.3 00:16:47.190 NVMe Specification Version (Identify): 1.3 00:16:47.190 Maximum Queue Entries: 128 00:16:47.190 Contiguous Queues Required: Yes 00:16:47.190 Arbitration Mechanisms Supported 00:16:47.190 Weighted Round Robin: Not 
Supported 00:16:47.190 Vendor Specific: Not Supported 00:16:47.190 Reset Timeout: 15000 ms 00:16:47.190 Doorbell Stride: 4 bytes 00:16:47.190 NVM Subsystem Reset: Not Supported 00:16:47.190 Command Sets Supported 00:16:47.190 NVM Command Set: Supported 00:16:47.190 Boot Partition: Not Supported 00:16:47.190 Memory Page Size Minimum: 4096 bytes 00:16:47.190 Memory Page Size Maximum: 4096 bytes 00:16:47.190 Persistent Memory Region: Not Supported 00:16:47.190 Optional Asynchronous Events Supported 00:16:47.190 Namespace Attribute Notices: Supported 00:16:47.190 Firmware Activation Notices: Not Supported 00:16:47.190 ANA Change Notices: Not Supported 00:16:47.190 PLE Aggregate Log Change Notices: Not Supported 00:16:47.190 LBA Status Info Alert Notices: Not Supported 00:16:47.190 EGE Aggregate Log Change Notices: Not Supported 00:16:47.190 Normal NVM Subsystem Shutdown event: Not Supported 00:16:47.190 Zone Descriptor Change Notices: Not Supported 00:16:47.190 Discovery Log Change Notices: Not Supported 00:16:47.190 Controller Attributes 00:16:47.190 128-bit Host Identifier: Supported 00:16:47.190 Non-Operational Permissive Mode: Not Supported 00:16:47.190 NVM Sets: Not Supported 00:16:47.190 Read Recovery Levels: Not Supported 00:16:47.190 Endurance Groups: Not Supported 00:16:47.190 Predictable Latency Mode: Not Supported 00:16:47.190 Traffic Based Keep ALive: Not Supported 00:16:47.190 Namespace Granularity: Not Supported 00:16:47.190 SQ Associations: Not Supported 00:16:47.190 UUID List: Not Supported 00:16:47.190 Multi-Domain Subsystem: Not Supported 00:16:47.190 Fixed Capacity Management: Not Supported 00:16:47.190 Variable Capacity Management: Not Supported 00:16:47.190 Delete Endurance Group: Not Supported 00:16:47.190 Delete NVM Set: Not Supported 00:16:47.190 Extended LBA Formats Supported: Not Supported 00:16:47.190 Flexible Data Placement Supported: Not Supported 00:16:47.190 00:16:47.190 Controller Memory Buffer Support 00:16:47.190 ================================ 00:16:47.190 Supported: No 00:16:47.190 00:16:47.190 Persistent Memory Region Support 00:16:47.190 ================================ 00:16:47.190 Supported: No 00:16:47.190 00:16:47.190 Admin Command Set Attributes 00:16:47.190 ============================ 00:16:47.190 Security Send/Receive: Not Supported 00:16:47.190 Format NVM: Not Supported 00:16:47.190 Firmware Activate/Download: Not Supported 00:16:47.190 Namespace Management: Not Supported 00:16:47.190 Device Self-Test: Not Supported 00:16:47.190 Directives: Not Supported 00:16:47.190 NVMe-MI: Not Supported 00:16:47.190 Virtualization Management: Not Supported 00:16:47.190 Doorbell Buffer Config: Not Supported 00:16:47.190 Get LBA Status Capability: Not Supported 00:16:47.190 Command & Feature Lockdown Capability: Not Supported 00:16:47.190 Abort Command Limit: 4 00:16:47.190 Async Event Request Limit: 4 00:16:47.190 Number of Firmware Slots: N/A 00:16:47.190 Firmware Slot 1 Read-Only: N/A 00:16:47.190 Firmware Activation Without Reset: N/A 00:16:47.190 Multiple Update Detection Support: N/A 00:16:47.190 Firmware Update Granularity: No Information Provided 00:16:47.190 Per-Namespace SMART Log: No 00:16:47.190 Asymmetric Namespace Access Log Page: Not Supported 00:16:47.190 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:16:47.190 Command Effects Log Page: Supported 00:16:47.190 Get Log Page Extended Data: Supported 00:16:47.190 Telemetry Log Pages: Not Supported 00:16:47.190 Persistent Event Log Pages: Not Supported 00:16:47.190 Supported Log Pages Log Page: May 
Support 00:16:47.190 Commands Supported & Effects Log Page: Not Supported 00:16:47.190 Feature Identifiers & Effects Log Page:May Support 00:16:47.190 NVMe-MI Commands & Effects Log Page: May Support 00:16:47.190 Data Area 4 for Telemetry Log: Not Supported 00:16:47.190 Error Log Page Entries Supported: 128 00:16:47.190 Keep Alive: Supported 00:16:47.190 Keep Alive Granularity: 10000 ms 00:16:47.190 00:16:47.191 NVM Command Set Attributes 00:16:47.191 ========================== 00:16:47.191 Submission Queue Entry Size 00:16:47.191 Max: 64 00:16:47.191 Min: 64 00:16:47.191 Completion Queue Entry Size 00:16:47.191 Max: 16 00:16:47.191 Min: 16 00:16:47.191 Number of Namespaces: 32 00:16:47.191 Compare Command: Supported 00:16:47.191 Write Uncorrectable Command: Not Supported 00:16:47.191 Dataset Management Command: Supported 00:16:47.191 Write Zeroes Command: Supported 00:16:47.191 Set Features Save Field: Not Supported 00:16:47.191 Reservations: Supported 00:16:47.191 Timestamp: Not Supported 00:16:47.191 Copy: Supported 00:16:47.191 Volatile Write Cache: Present 00:16:47.191 Atomic Write Unit (Normal): 1 00:16:47.191 Atomic Write Unit (PFail): 1 00:16:47.191 Atomic Compare & Write Unit: 1 00:16:47.191 Fused Compare & Write: Supported 00:16:47.191 Scatter-Gather List 00:16:47.191 SGL Command Set: Supported 00:16:47.191 SGL Keyed: Supported 00:16:47.191 SGL Bit Bucket Descriptor: Not Supported 00:16:47.191 SGL Metadata Pointer: Not Supported 00:16:47.191 Oversized SGL: Not Supported 00:16:47.191 SGL Metadata Address: Not Supported 00:16:47.191 SGL Offset: Supported 00:16:47.191 Transport SGL Data Block: Not Supported 00:16:47.191 Replay Protected Memory Block: Not Supported 00:16:47.191 00:16:47.191 Firmware Slot Information 00:16:47.191 ========================= 00:16:47.191 Active slot: 1 00:16:47.191 Slot 1 Firmware Revision: 25.01 00:16:47.191 00:16:47.191 00:16:47.191 Commands Supported and Effects 00:16:47.191 ============================== 00:16:47.191 Admin Commands 00:16:47.191 -------------- 00:16:47.191 Get Log Page (02h): Supported 00:16:47.191 Identify (06h): Supported 00:16:47.191 Abort (08h): Supported 00:16:47.191 Set Features (09h): Supported 00:16:47.191 Get Features (0Ah): Supported 00:16:47.191 Asynchronous Event Request (0Ch): Supported 00:16:47.191 Keep Alive (18h): Supported 00:16:47.191 I/O Commands 00:16:47.191 ------------ 00:16:47.191 Flush (00h): Supported LBA-Change 00:16:47.191 Write (01h): Supported LBA-Change 00:16:47.191 Read (02h): Supported 00:16:47.191 Compare (05h): Supported 00:16:47.191 Write Zeroes (08h): Supported LBA-Change 00:16:47.191 Dataset Management (09h): Supported LBA-Change 00:16:47.191 Copy (19h): Supported LBA-Change 00:16:47.191 00:16:47.191 Error Log 00:16:47.191 ========= 00:16:47.191 00:16:47.191 Arbitration 00:16:47.191 =========== 00:16:47.191 Arbitration Burst: 1 00:16:47.191 00:16:47.191 Power Management 00:16:47.191 ================ 00:16:47.191 Number of Power States: 1 00:16:47.191 Current Power State: Power State #0 00:16:47.191 Power State #0: 00:16:47.191 Max Power: 0.00 W 00:16:47.191 Non-Operational State: Operational 00:16:47.191 Entry Latency: Not Reported 00:16:47.191 Exit Latency: Not Reported 00:16:47.191 Relative Read Throughput: 0 00:16:47.191 Relative Read Latency: 0 00:16:47.191 Relative Write Throughput: 0 00:16:47.191 Relative Write Latency: 0 00:16:47.191 Idle Power: Not Reported 00:16:47.191 Active Power: Not Reported 00:16:47.191 Non-Operational Permissive Mode: Not Supported 00:16:47.191 00:16:47.191 Health 
Information 00:16:47.191 ================== 00:16:47.191 Critical Warnings: 00:16:47.191 Available Spare Space: OK 00:16:47.191 Temperature: OK 00:16:47.191 Device Reliability: OK 00:16:47.191 Read Only: No 00:16:47.191 Volatile Memory Backup: OK 00:16:47.191 Current Temperature: 0 Kelvin (-273 Celsius) 00:16:47.191 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:16:47.191 Available Spare: 0% 00:16:47.191 Available Spare Threshold: 0% 00:16:47.191 Life Percentage Used:[2024-11-19 09:44:34.732037] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:47.191 [2024-11-19 09:44:34.732043] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xf35750) 00:16:47.191 [2024-11-19 09:44:34.732051] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.191 [2024-11-19 09:44:34.732073] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf9a1c0, cid 7, qid 0 00:16:47.191 [2024-11-19 09:44:34.732121] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:47.191 [2024-11-19 09:44:34.732128] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:47.191 [2024-11-19 09:44:34.732131] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:47.191 [2024-11-19 09:44:34.732135] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf9a1c0) on tqpair=0xf35750 00:16:47.191 [2024-11-19 09:44:34.732176] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:16:47.191 [2024-11-19 09:44:34.732188] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf99740) on tqpair=0xf35750 00:16:47.191 [2024-11-19 09:44:34.732195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.191 [2024-11-19 09:44:34.732200] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf998c0) on tqpair=0xf35750 00:16:47.191 [2024-11-19 09:44:34.732206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.191 [2024-11-19 09:44:34.736238] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf99a40) on tqpair=0xf35750 00:16:47.192 [2024-11-19 09:44:34.736245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.192 [2024-11-19 09:44:34.736251] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf99bc0) on tqpair=0xf35750 00:16:47.192 [2024-11-19 09:44:34.736256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.192 [2024-11-19 09:44:34.736267] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:47.192 [2024-11-19 09:44:34.736272] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:47.192 [2024-11-19 09:44:34.736276] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf35750) 00:16:47.192 [2024-11-19 09:44:34.736285] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.192 [2024-11-19 09:44:34.736314] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf99bc0, cid 3, qid 0 00:16:47.192 [2024-11-19 
09:44:34.736365] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:47.192 [2024-11-19 09:44:34.736373] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:47.192 [2024-11-19 09:44:34.736377] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:47.192 [2024-11-19 09:44:34.736382] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf99bc0) on tqpair=0xf35750 00:16:47.192 [2024-11-19 09:44:34.736390] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:47.192 [2024-11-19 09:44:34.736394] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:47.192 [2024-11-19 09:44:34.736398] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf35750) 00:16:47.192 [2024-11-19 09:44:34.736406] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.192 [2024-11-19 09:44:34.736427] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf99bc0, cid 3, qid 0 00:16:47.192 [2024-11-19 09:44:34.736494] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:47.192 [2024-11-19 09:44:34.736501] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:47.192 [2024-11-19 09:44:34.736504] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:47.192 [2024-11-19 09:44:34.736509] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf99bc0) on tqpair=0xf35750 00:16:47.192 [2024-11-19 09:44:34.736514] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:16:47.192 [2024-11-19 09:44:34.736519] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:16:47.192 [2024-11-19 09:44:34.736530] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:47.192 [2024-11-19 09:44:34.736534] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:47.192 [2024-11-19 09:44:34.736538] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf35750) 00:16:47.192 [2024-11-19 09:44:34.736546] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.192 [2024-11-19 09:44:34.736562] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf99bc0, cid 3, qid 0 00:16:47.192 [2024-11-19 09:44:34.736607] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:47.192 [2024-11-19 09:44:34.736614] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:47.192 [2024-11-19 09:44:34.736618] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:47.192 [2024-11-19 09:44:34.736622] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf99bc0) on tqpair=0xf35750 00:16:47.192 [2024-11-19 09:44:34.736633] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:47.192 [2024-11-19 09:44:34.736638] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:47.192 [2024-11-19 09:44:34.736642] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf35750) 00:16:47.192 [2024-11-19 09:44:34.736650] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.192 [2024-11-19 09:44:34.736667] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf99bc0, cid 3, qid 0 00:16:47.192 [2024-11-19 09:44:34.736711] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:47.192 [2024-11-19 09:44:34.736718] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:47.192 [2024-11-19 09:44:34.736722] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:47.192 [2024-11-19 09:44:34.736726] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf99bc0) on tqpair=0xf35750 00:16:47.192 [2024-11-19 09:44:34.736736] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:47.192 [2024-11-19 09:44:34.736741] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:47.192 [2024-11-19 09:44:34.736745] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf35750) 00:16:47.192 [2024-11-19 09:44:34.736752] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.192 [2024-11-19 09:44:34.736768] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf99bc0, cid 3, qid 0 00:16:47.192 [2024-11-19 09:44:34.736816] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:47.192 [2024-11-19 09:44:34.736822] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:47.192 [2024-11-19 09:44:34.736826] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:47.192 [2024-11-19 09:44:34.736830] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf99bc0) on tqpair=0xf35750 00:16:47.192 [2024-11-19 09:44:34.736841] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:47.192 [2024-11-19 09:44:34.736845] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:47.192 [2024-11-19 09:44:34.736849] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf35750) 00:16:47.192 [2024-11-19 09:44:34.736856] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.192 [2024-11-19 09:44:34.736872] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf99bc0, cid 3, qid 0 00:16:47.192 [2024-11-19 09:44:34.736918] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:47.192 [2024-11-19 09:44:34.736924] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:47.192 [2024-11-19 09:44:34.736929] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:47.192 [2024-11-19 09:44:34.736933] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf99bc0) on tqpair=0xf35750 00:16:47.192 [2024-11-19 09:44:34.736943] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:47.192 [2024-11-19 09:44:34.736948] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:47.192 [2024-11-19 09:44:34.736952] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf35750) 00:16:47.192 [2024-11-19 09:44:34.736959] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.192 [2024-11-19 09:44:34.736975] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf99bc0, cid 3, qid 0 00:16:47.192 [2024-11-19 09:44:34.737020] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:47.192 [2024-11-19 
09:44:34.737026] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:47.192 [2024-11-19 09:44:34.737030] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:47.192 [2024-11-19 09:44:34.737034] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf99bc0) on tqpair=0xf35750 00:16:47.192 [2024-11-19 09:44:34.737045] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:47.192 [2024-11-19 09:44:34.737049] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:47.192 [2024-11-19 09:44:34.737053] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf35750) 00:16:47.192 [2024-11-19 09:44:34.737060] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.193 [2024-11-19 09:44:34.737076] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf99bc0, cid 3, qid 0 00:16:47.193 [2024-11-19 09:44:34.737121] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:47.193 [2024-11-19 09:44:34.737128] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:47.193 [2024-11-19 09:44:34.737132] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:47.193 [2024-11-19 09:44:34.737136] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf99bc0) on tqpair=0xf35750 00:16:47.193 [2024-11-19 09:44:34.737146] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:47.193 [2024-11-19 09:44:34.737151] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:47.193 [2024-11-19 09:44:34.737155] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf35750) 00:16:47.193 [2024-11-19 09:44:34.737162] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.193 [2024-11-19 09:44:34.737178] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf99bc0, cid 3, qid 0 00:16:47.193 [2024-11-19 09:44:34.737241] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:47.193 [2024-11-19 09:44:34.737250] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:47.193 [2024-11-19 09:44:34.737255] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:47.193 [2024-11-19 09:44:34.737259] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf99bc0) on tqpair=0xf35750 00:16:47.193 [2024-11-19 09:44:34.737270] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:47.193 [2024-11-19 09:44:34.737275] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:47.193 [2024-11-19 09:44:34.737279] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf35750) 00:16:47.193 [2024-11-19 09:44:34.737286] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.193 [2024-11-19 09:44:34.737306] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf99bc0, cid 3, qid 0 00:16:47.193 [2024-11-19 09:44:34.737355] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:47.193 [2024-11-19 09:44:34.737361] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:47.193 [2024-11-19 09:44:34.737365] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:47.193 [2024-11-19 
09:44:34.737369] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf99bc0) on tqpair=0xf35750 00:16:47.193 [2024-11-19 09:44:34.737380] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:47.193 [2024-11-19 09:44:34.737385] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:47.193 [2024-11-19 09:44:34.737389] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf35750) 00:16:47.193 [2024-11-19 09:44:34.737396] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.193 [2024-11-19 09:44:34.737413] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf99bc0, cid 3, qid 0 00:16:47.193 [2024-11-19 09:44:34.737461] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:47.193 [2024-11-19 09:44:34.737468] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:47.193 [2024-11-19 09:44:34.737472] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:47.193 [2024-11-19 09:44:34.737476] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf99bc0) on tqpair=0xf35750 00:16:47.193 [2024-11-19 09:44:34.737486] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:47.193 [2024-11-19 09:44:34.737491] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:47.193 [2024-11-19 09:44:34.737495] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf35750) 00:16:47.193 [2024-11-19 09:44:34.737502] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.193 [2024-11-19 09:44:34.737519] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf99bc0, cid 3, qid 0 00:16:47.193 [2024-11-19 09:44:34.737566] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:47.193 [2024-11-19 09:44:34.737573] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:47.193 [2024-11-19 09:44:34.737576] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:47.193 [2024-11-19 09:44:34.737581] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf99bc0) on tqpair=0xf35750 00:16:47.193 [2024-11-19 09:44:34.737591] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:47.193 [2024-11-19 09:44:34.737595] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:47.193 [2024-11-19 09:44:34.737599] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf35750) 00:16:47.193 [2024-11-19 09:44:34.737607] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.193 [2024-11-19 09:44:34.737623] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf99bc0, cid 3, qid 0 00:16:47.193 [2024-11-19 09:44:34.737667] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:47.193 [2024-11-19 09:44:34.737674] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:47.193 [2024-11-19 09:44:34.737678] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:47.193 [2024-11-19 09:44:34.737682] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf99bc0) on tqpair=0xf35750 00:16:47.193 [2024-11-19 09:44:34.737693] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 
00:16:47.193 [2024-11-19 09:44:34.737697] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:47.193 [2024-11-19 09:44:34.737701] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf35750) 00:16:47.193 [2024-11-19 09:44:34.737709] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.193 [2024-11-19 09:44:34.737725] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf99bc0, cid 3, qid 0 00:16:47.193 [2024-11-19 09:44:34.737770] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:47.193 [2024-11-19 09:44:34.737777] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:47.193 [2024-11-19 09:44:34.737781] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:47.193 [2024-11-19 09:44:34.737785] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf99bc0) on tqpair=0xf35750 00:16:47.193 [2024-11-19 09:44:34.737796] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:47.193 [2024-11-19 09:44:34.737800] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:47.193 [2024-11-19 09:44:34.737804] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf35750) 00:16:47.193 [2024-11-19 09:44:34.737812] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.193 [2024-11-19 09:44:34.737828] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf99bc0, cid 3, qid 0 00:16:47.193 [2024-11-19 09:44:34.737870] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:47.193 [2024-11-19 09:44:34.737877] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:47.193 [2024-11-19 09:44:34.737881] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:47.193 [2024-11-19 09:44:34.737885] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf99bc0) on tqpair=0xf35750 00:16:47.193 [2024-11-19 09:44:34.737895] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:47.193 [2024-11-19 09:44:34.737900] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:47.193 [2024-11-19 09:44:34.737904] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf35750) 00:16:47.193 [2024-11-19 09:44:34.737911] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.193 [2024-11-19 09:44:34.737927] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf99bc0, cid 3, qid 0 00:16:47.193 [2024-11-19 09:44:34.737974] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:47.193 [2024-11-19 09:44:34.737981] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:47.193 [2024-11-19 09:44:34.737985] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:47.193 [2024-11-19 09:44:34.737989] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf99bc0) on tqpair=0xf35750 00:16:47.193 [2024-11-19 09:44:34.738000] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:47.193 [2024-11-19 09:44:34.738004] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:47.193 [2024-11-19 09:44:34.738008] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd 
cid=3 on tqpair(0xf35750) 00:16:47.193 [2024-11-19 09:44:34.738015] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.193 [2024-11-19 09:44:34.738032] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf99bc0, cid 3, qid 0 00:16:47.193 [2024-11-19 09:44:34.738077] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:47.193 [2024-11-19 09:44:34.738084] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:47.193 [2024-11-19 09:44:34.738088] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:47.193 [2024-11-19 09:44:34.738092] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf99bc0) on tqpair=0xf35750 00:16:47.193 [2024-11-19 09:44:34.738102] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:47.193 [2024-11-19 09:44:34.738107] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:47.193 [2024-11-19 09:44:34.738111] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf35750) 00:16:47.193 [2024-11-19 09:44:34.738118] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.193 [2024-11-19 09:44:34.738134] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf99bc0, cid 3, qid 0 00:16:47.193 [2024-11-19 09:44:34.738184] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:47.193 [2024-11-19 09:44:34.738191] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:47.193 [2024-11-19 09:44:34.738195] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:47.193 [2024-11-19 09:44:34.738199] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf99bc0) on tqpair=0xf35750 00:16:47.193 [2024-11-19 09:44:34.738223] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:47.193 [2024-11-19 09:44:34.738229] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:47.193 [2024-11-19 09:44:34.738233] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf35750) 00:16:47.193 [2024-11-19 09:44:34.738241] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.193 [2024-11-19 09:44:34.738260] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf99bc0, cid 3, qid 0 00:16:47.193 [2024-11-19 09:44:34.738308] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:47.193 [2024-11-19 09:44:34.738315] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:47.193 [2024-11-19 09:44:34.738318] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:47.193 [2024-11-19 09:44:34.738323] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf99bc0) on tqpair=0xf35750 00:16:47.193 [2024-11-19 09:44:34.738333] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:47.194 [2024-11-19 09:44:34.738338] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:47.194 [2024-11-19 09:44:34.738342] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf35750) 00:16:47.194 [2024-11-19 09:44:34.738349] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.194 
[2024-11-19 09:44:34.738366] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf99bc0, cid 3, qid 0 00:16:47.194 [2024-11-19 09:44:34.738416] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:47.194 [2024-11-19 09:44:34.738423] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:47.194 [2024-11-19 09:44:34.738427] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:47.194 [2024-11-19 09:44:34.738431] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf99bc0) on tqpair=0xf35750 00:16:47.194 [2024-11-19 09:44:34.738441] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:47.194 [2024-11-19 09:44:34.738446] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:47.194 [2024-11-19 09:44:34.738450] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf35750) 00:16:47.194 [2024-11-19 09:44:34.738457] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.194 [2024-11-19 09:44:34.738473] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf99bc0, cid 3, qid 0 00:16:47.194 [2024-11-19 09:44:34.738516] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:47.194 [2024-11-19 09:44:34.738523] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:47.194 [2024-11-19 09:44:34.738527] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:47.194 [2024-11-19 09:44:34.738531] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf99bc0) on tqpair=0xf35750 00:16:47.194 [2024-11-19 09:44:34.738541] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:47.194 [2024-11-19 09:44:34.738546] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:47.194 [2024-11-19 09:44:34.738550] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf35750) 00:16:47.194 [2024-11-19 09:44:34.738557] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.194 [2024-11-19 09:44:34.738574] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf99bc0, cid 3, qid 0 00:16:47.194 [2024-11-19 09:44:34.738618] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:47.194 [2024-11-19 09:44:34.738625] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:47.194 [2024-11-19 09:44:34.738629] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:47.194 [2024-11-19 09:44:34.738633] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf99bc0) on tqpair=0xf35750 00:16:47.194 [2024-11-19 09:44:34.738643] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:47.194 [2024-11-19 09:44:34.738648] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:47.194 [2024-11-19 09:44:34.738652] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf35750) 00:16:47.194 [2024-11-19 09:44:34.738659] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.194 [2024-11-19 09:44:34.738676] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf99bc0, cid 3, qid 0 00:16:47.194 [2024-11-19 09:44:34.738724] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 
5 00:16:47.194 [2024-11-19 09:44:34.738731] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:47.194 [2024-11-19 09:44:34.738735] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:47.194 [2024-11-19 09:44:34.738739] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf99bc0) on tqpair=0xf35750 00:16:47.194 [2024-11-19 09:44:34.738749] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:47.194 [2024-11-19 09:44:34.738754] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:47.194 [2024-11-19 09:44:34.738758] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf35750) 00:16:47.194 [2024-11-19 09:44:34.738765] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.194 [2024-11-19 09:44:34.738781] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf99bc0, cid 3, qid 0 00:16:47.194 [2024-11-19 09:44:34.738826] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:47.194 [2024-11-19 09:44:34.738832] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:47.194 [2024-11-19 09:44:34.738837] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:47.194 [2024-11-19 09:44:34.738841] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf99bc0) on tqpair=0xf35750 00:16:47.194 [2024-11-19 09:44:34.738851] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:47.194 [2024-11-19 09:44:34.738856] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:47.194 [2024-11-19 09:44:34.738860] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf35750) 00:16:47.194 [2024-11-19 09:44:34.738867] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.194 [2024-11-19 09:44:34.738883] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf99bc0, cid 3, qid 0 00:16:47.194 [2024-11-19 09:44:34.738931] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:47.194 [2024-11-19 09:44:34.738938] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:47.194 [2024-11-19 09:44:34.738942] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:47.194 [2024-11-19 09:44:34.738946] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf99bc0) on tqpair=0xf35750 00:16:47.194 [2024-11-19 09:44:34.738956] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:47.194 [2024-11-19 09:44:34.738961] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:47.194 [2024-11-19 09:44:34.738965] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf35750) 00:16:47.194 [2024-11-19 09:44:34.738972] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.194 [2024-11-19 09:44:34.738988] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf99bc0, cid 3, qid 0 00:16:47.194 [2024-11-19 09:44:34.739030] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:47.194 [2024-11-19 09:44:34.739037] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:47.194 [2024-11-19 09:44:34.739041] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:16:47.194 [2024-11-19 09:44:34.739045] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf99bc0) on tqpair=0xf35750 00:16:47.194 [2024-11-19 09:44:34.739055] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:47.194 [2024-11-19 09:44:34.739060] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:47.194 [2024-11-19 09:44:34.739064] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf35750) 00:16:47.194 [2024-11-19 09:44:34.739071] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.194 [2024-11-19 09:44:34.739087] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf99bc0, cid 3, qid 0 00:16:47.194 [2024-11-19 09:44:34.739143] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:47.194 [2024-11-19 09:44:34.739151] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:47.194 [2024-11-19 09:44:34.739155] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:47.194 [2024-11-19 09:44:34.739159] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf99bc0) on tqpair=0xf35750 00:16:47.194 [2024-11-19 09:44:34.739170] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:47.194 [2024-11-19 09:44:34.739174] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:47.194 [2024-11-19 09:44:34.739178] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf35750) 00:16:47.194 [2024-11-19 09:44:34.739186] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.194 [2024-11-19 09:44:34.739205] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf99bc0, cid 3, qid 0 00:16:47.194 [2024-11-19 09:44:34.739261] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:47.194 [2024-11-19 09:44:34.739268] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:47.194 [2024-11-19 09:44:34.739272] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:47.194 [2024-11-19 09:44:34.739276] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf99bc0) on tqpair=0xf35750 00:16:47.194 [2024-11-19 09:44:34.739287] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:47.194 [2024-11-19 09:44:34.739292] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:47.194 [2024-11-19 09:44:34.739296] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf35750) 00:16:47.194 [2024-11-19 09:44:34.739303] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.194 [2024-11-19 09:44:34.739322] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf99bc0, cid 3, qid 0 00:16:47.194 [2024-11-19 09:44:34.739364] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:47.194 [2024-11-19 09:44:34.739371] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:47.194 [2024-11-19 09:44:34.739375] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:47.194 [2024-11-19 09:44:34.739379] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf99bc0) on tqpair=0xf35750 00:16:47.194 [2024-11-19 09:44:34.739389] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:47.194 [2024-11-19 09:44:34.739394] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:47.194 [2024-11-19 09:44:34.739398] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf35750) 00:16:47.194 [2024-11-19 09:44:34.739405] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.194 [2024-11-19 09:44:34.739422] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf99bc0, cid 3, qid 0 00:16:47.194 [2024-11-19 09:44:34.739467] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:47.194 [2024-11-19 09:44:34.739473] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:47.194 [2024-11-19 09:44:34.739477] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:47.194 [2024-11-19 09:44:34.739481] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf99bc0) on tqpair=0xf35750 00:16:47.194 [2024-11-19 09:44:34.739491] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:47.194 [2024-11-19 09:44:34.739496] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:47.194 [2024-11-19 09:44:34.739500] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf35750) 00:16:47.194 [2024-11-19 09:44:34.739507] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.194 [2024-11-19 09:44:34.739523] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf99bc0, cid 3, qid 0 00:16:47.194 [2024-11-19 09:44:34.739574] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:47.195 [2024-11-19 09:44:34.739581] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:47.195 [2024-11-19 09:44:34.739584] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:47.195 [2024-11-19 09:44:34.739588] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf99bc0) on tqpair=0xf35750 00:16:47.195 [2024-11-19 09:44:34.739599] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:47.195 [2024-11-19 09:44:34.739603] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:47.195 [2024-11-19 09:44:34.739607] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf35750) 00:16:47.195 [2024-11-19 09:44:34.739614] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.195 [2024-11-19 09:44:34.739631] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf99bc0, cid 3, qid 0 00:16:47.195 [2024-11-19 09:44:34.739675] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:47.195 [2024-11-19 09:44:34.739682] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:47.195 [2024-11-19 09:44:34.739685] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:47.195 [2024-11-19 09:44:34.739690] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf99bc0) on tqpair=0xf35750 00:16:47.195 [2024-11-19 09:44:34.739700] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:47.195 [2024-11-19 09:44:34.739705] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:47.195 [2024-11-19 09:44:34.739709] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf35750) 00:16:47.195 [2024-11-19 09:44:34.739716] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.195 [2024-11-19 09:44:34.739732] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf99bc0, cid 3, qid 0 00:16:47.195 [2024-11-19 09:44:34.739777] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:47.195 [2024-11-19 09:44:34.739784] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:47.195 [2024-11-19 09:44:34.739787] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:47.195 [2024-11-19 09:44:34.739792] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf99bc0) on tqpair=0xf35750 00:16:47.195 [2024-11-19 09:44:34.739802] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:47.195 [2024-11-19 09:44:34.739806] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:47.195 [2024-11-19 09:44:34.739810] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf35750) 00:16:47.195 [2024-11-19 09:44:34.739817] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.195 [2024-11-19 09:44:34.739833] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf99bc0, cid 3, qid 0 00:16:47.195 [2024-11-19 09:44:34.739879] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:47.195 [2024-11-19 09:44:34.739886] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:47.195 [2024-11-19 09:44:34.739889] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:47.195 [2024-11-19 09:44:34.739893] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf99bc0) on tqpair=0xf35750 00:16:47.195 [2024-11-19 09:44:34.739904] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:47.195 [2024-11-19 09:44:34.739908] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:47.195 [2024-11-19 09:44:34.739912] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf35750) 00:16:47.195 [2024-11-19 09:44:34.739919] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.195 [2024-11-19 09:44:34.739935] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf99bc0, cid 3, qid 0 00:16:47.195 [2024-11-19 09:44:34.739984] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:47.195 [2024-11-19 09:44:34.739991] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:47.195 [2024-11-19 09:44:34.739995] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:47.195 [2024-11-19 09:44:34.739999] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf99bc0) on tqpair=0xf35750 00:16:47.195 [2024-11-19 09:44:34.740009] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:47.195 [2024-11-19 09:44:34.740014] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:47.195 [2024-11-19 09:44:34.740018] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf35750) 00:16:47.195 [2024-11-19 09:44:34.740025] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.195 [2024-11-19 09:44:34.740041] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf99bc0, cid 3, qid 0 00:16:47.195 [2024-11-19 09:44:34.740092] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:47.195 [2024-11-19 09:44:34.740098] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:47.195 [2024-11-19 09:44:34.740102] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:47.195 [2024-11-19 09:44:34.740106] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf99bc0) on tqpair=0xf35750 00:16:47.195 [2024-11-19 09:44:34.740117] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:47.195 [2024-11-19 09:44:34.740121] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:47.195 [2024-11-19 09:44:34.740125] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf35750) 00:16:47.195 [2024-11-19 09:44:34.740133] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.195 [2024-11-19 09:44:34.740149] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf99bc0, cid 3, qid 0 00:16:47.195 [2024-11-19 09:44:34.740193] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:47.195 [2024-11-19 09:44:34.740200] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:47.195 [2024-11-19 09:44:34.740204] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:47.195 [2024-11-19 09:44:34.744273] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf99bc0) on tqpair=0xf35750 00:16:47.195 [2024-11-19 09:44:34.744295] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:47.195 [2024-11-19 09:44:34.744301] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:47.195 [2024-11-19 09:44:34.744305] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf35750) 00:16:47.195 [2024-11-19 09:44:34.744314] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.195 [2024-11-19 09:44:34.744342] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf99bc0, cid 3, qid 0 00:16:47.195 [2024-11-19 09:44:34.744392] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:47.195 [2024-11-19 09:44:34.744399] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:47.195 [2024-11-19 09:44:34.744402] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:47.195 [2024-11-19 09:44:34.744407] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf99bc0) on tqpair=0xf35750 00:16:47.195 [2024-11-19 09:44:34.744415] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 7 milliseconds 00:16:47.195 0% 00:16:47.195 Data Units Read: 0 00:16:47.195 Data Units Written: 0 00:16:47.195 Host Read Commands: 0 00:16:47.195 Host Write Commands: 0 00:16:47.195 Controller Busy Time: 0 minutes 00:16:47.195 Power Cycles: 0 00:16:47.195 Power On Hours: 0 hours 00:16:47.195 Unsafe Shutdowns: 0 00:16:47.195 Unrecoverable Media Errors: 0 00:16:47.195 Lifetime Error Log Entries: 0 00:16:47.195 Warning Temperature Time: 0 minutes 00:16:47.195 Critical Temperature Time: 0 minutes 00:16:47.195 00:16:47.195 Number of Queues 00:16:47.195 
================ 00:16:47.195 Number of I/O Submission Queues: 127 00:16:47.195 Number of I/O Completion Queues: 127 00:16:47.195 00:16:47.195 Active Namespaces 00:16:47.195 ================= 00:16:47.195 Namespace ID:1 00:16:47.195 Error Recovery Timeout: Unlimited 00:16:47.195 Command Set Identifier: NVM (00h) 00:16:47.195 Deallocate: Supported 00:16:47.195 Deallocated/Unwritten Error: Not Supported 00:16:47.195 Deallocated Read Value: Unknown 00:16:47.195 Deallocate in Write Zeroes: Not Supported 00:16:47.195 Deallocated Guard Field: 0xFFFF 00:16:47.195 Flush: Supported 00:16:47.195 Reservation: Supported 00:16:47.195 Namespace Sharing Capabilities: Multiple Controllers 00:16:47.195 Size (in LBAs): 131072 (0GiB) 00:16:47.195 Capacity (in LBAs): 131072 (0GiB) 00:16:47.195 Utilization (in LBAs): 131072 (0GiB) 00:16:47.195 NGUID: ABCDEF0123456789ABCDEF0123456789 00:16:47.195 EUI64: ABCDEF0123456789 00:16:47.195 UUID: c9ff6d93-e070-4efb-aef0-051de82b3462 00:16:47.195 Thin Provisioning: Not Supported 00:16:47.195 Per-NS Atomic Units: Yes 00:16:47.195 Atomic Boundary Size (Normal): 0 00:16:47.195 Atomic Boundary Size (PFail): 0 00:16:47.195 Atomic Boundary Offset: 0 00:16:47.195 Maximum Single Source Range Length: 65535 00:16:47.195 Maximum Copy Length: 65535 00:16:47.195 Maximum Source Range Count: 1 00:16:47.195 NGUID/EUI64 Never Reused: No 00:16:47.195 Namespace Write Protected: No 00:16:47.195 Number of LBA Formats: 1 00:16:47.195 Current LBA Format: LBA Format #00 00:16:47.195 LBA Format #00: Data Size: 512 Metadata Size: 0 00:16:47.195 00:16:47.195 09:44:34 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:16:47.453 09:44:34 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:47.453 09:44:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.453 09:44:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:47.453 09:44:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.453 09:44:34 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:16:47.453 09:44:34 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:16:47.453 09:44:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:47.453 09:44:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:16:47.453 09:44:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:47.453 09:44:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:16:47.453 09:44:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:47.453 09:44:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:47.453 rmmod nvme_tcp 00:16:47.453 rmmod nvme_fabrics 00:16:47.453 rmmod nvme_keyring 00:16:47.453 09:44:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:47.453 09:44:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:16:47.453 09:44:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:16:47.453 09:44:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 74246 ']' 00:16:47.453 09:44:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 74246 00:16:47.453 09:44:34 nvmf_tcp.nvmf_host.nvmf_identify -- 
common/autotest_common.sh@954 -- # '[' -z 74246 ']' 00:16:47.453 09:44:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 74246 00:16:47.453 09:44:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:16:47.453 09:44:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:47.453 09:44:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74246 00:16:47.453 killing process with pid 74246 00:16:47.453 09:44:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:47.453 09:44:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:47.453 09:44:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74246' 00:16:47.453 09:44:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 74246 00:16:47.453 09:44:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 74246 00:16:47.712 09:44:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:47.712 09:44:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:47.712 09:44:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:47.712 09:44:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:16:47.712 09:44:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:16:47.712 09:44:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:47.712 09:44:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:16:47.712 09:44:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:47.712 09:44:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:47.712 09:44:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:47.712 09:44:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:47.712 09:44:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:47.712 09:44:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:47.712 09:44:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:47.712 09:44:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:47.712 09:44:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:47.712 09:44:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:47.712 09:44:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:47.712 09:44:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:47.712 09:44:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:47.712 09:44:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:47.970 09:44:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete 
nvmf_tgt_if2 00:16:47.970 09:44:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:47.970 09:44:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:47.970 09:44:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:47.970 09:44:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:47.970 09:44:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@300 -- # return 0 00:16:47.970 00:16:47.970 real 0m2.933s 00:16:47.970 user 0m7.416s 00:16:47.970 sys 0m0.775s 00:16:47.970 09:44:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:47.970 09:44:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:47.970 ************************************ 00:16:47.970 END TEST nvmf_identify 00:16:47.970 ************************************ 00:16:47.970 09:44:35 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:16:47.970 09:44:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:47.970 09:44:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:47.970 09:44:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:16:47.970 ************************************ 00:16:47.970 START TEST nvmf_perf 00:16:47.970 ************************************ 00:16:47.970 09:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:16:47.970 * Looking for test storage... 00:16:47.970 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:47.970 09:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:47.970 09:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lcov --version 00:16:47.970 09:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:48.231 09:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:48.231 09:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:48.231 09:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:48.231 09:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:48.231 09:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:16:48.231 09:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:16:48.231 09:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:16:48.231 09:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:16:48.231 09:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:16:48.231 09:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:16:48.231 09:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:16:48.231 09:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:48.231 09:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:16:48.231 09:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:16:48.231 09:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:16:48.231 09:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:48.231 09:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:16:48.231 09:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:16:48.231 09:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:48.231 09:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:16:48.231 09:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:16:48.231 09:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:16:48.231 09:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:16:48.231 09:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:48.231 09:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:16:48.231 09:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:16:48.231 09:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:48.231 09:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:48.231 09:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:16:48.231 09:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:48.231 09:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:48.231 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:48.231 --rc genhtml_branch_coverage=1 00:16:48.231 --rc genhtml_function_coverage=1 00:16:48.231 --rc genhtml_legend=1 00:16:48.231 --rc geninfo_all_blocks=1 00:16:48.231 --rc geninfo_unexecuted_blocks=1 00:16:48.231 00:16:48.231 ' 00:16:48.231 09:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:48.231 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:48.231 --rc genhtml_branch_coverage=1 00:16:48.231 --rc genhtml_function_coverage=1 00:16:48.231 --rc genhtml_legend=1 00:16:48.231 --rc geninfo_all_blocks=1 00:16:48.231 --rc geninfo_unexecuted_blocks=1 00:16:48.231 00:16:48.231 ' 00:16:48.231 09:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:48.231 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:48.231 --rc genhtml_branch_coverage=1 00:16:48.231 --rc genhtml_function_coverage=1 00:16:48.231 --rc genhtml_legend=1 00:16:48.231 --rc geninfo_all_blocks=1 00:16:48.231 --rc geninfo_unexecuted_blocks=1 00:16:48.231 00:16:48.231 ' 00:16:48.231 09:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:48.231 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:48.231 --rc genhtml_branch_coverage=1 00:16:48.231 --rc genhtml_function_coverage=1 00:16:48.231 --rc genhtml_legend=1 00:16:48.231 --rc geninfo_all_blocks=1 00:16:48.231 --rc geninfo_unexecuted_blocks=1 00:16:48.231 00:16:48.231 ' 00:16:48.231 09:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:48.231 09:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:16:48.231 09:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:48.231 09:44:35 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:48.231 09:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:48.231 09:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:48.231 09:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:48.231 09:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:48.231 09:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:48.231 09:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:48.231 09:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:48.231 09:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:48.231 09:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c 00:16:48.231 09:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=9203ba0c-8506-4f0b-a886-a7f874c4694c 00:16:48.231 09:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:48.231 09:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:48.231 09:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:48.231 09:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:48.231 09:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:48.231 09:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:16:48.231 09:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:48.231 09:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:48.231 09:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:48.231 09:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:48.231 09:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:48.231 09:44:35 
nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:48.231 09:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:16:48.232 09:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:48.232 09:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:16:48.232 09:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:48.232 09:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:48.232 09:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:48.232 09:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:48.232 09:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:48.232 09:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:48.232 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:48.232 09:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:48.232 09:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:48.232 09:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:48.232 09:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:16:48.232 09:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:16:48.232 09:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:48.232 09:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:16:48.232 09:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:48.232 09:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:48.232 09:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:48.232 09:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:48.232 09:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:48.232 09:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:48.232 09:44:35 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:48.232 09:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:48.232 09:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:16:48.232 09:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:16:48.232 09:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:16:48.232 09:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:16:48.232 09:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:16:48.232 09:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@460 -- # nvmf_veth_init 00:16:48.232 09:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:48.232 09:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:48.232 09:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:48.232 09:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:48.232 09:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:48.232 09:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:48.232 09:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:48.232 09:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:48.232 09:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:48.232 09:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:48.232 09:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:48.232 09:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:48.232 09:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:48.232 09:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:48.232 09:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:48.232 09:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:48.232 09:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:48.232 Cannot find device "nvmf_init_br" 00:16:48.232 09:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # true 00:16:48.232 09:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:48.232 Cannot find device "nvmf_init_br2" 00:16:48.232 09:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # true 00:16:48.232 09:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:48.232 Cannot find device "nvmf_tgt_br" 00:16:48.232 09:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # true 00:16:48.232 09:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:16:48.232 Cannot find device "nvmf_tgt_br2" 00:16:48.232 09:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- 
# true 00:16:48.232 09:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:48.232 Cannot find device "nvmf_init_br" 00:16:48.232 09:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@166 -- # true 00:16:48.232 09:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:48.232 Cannot find device "nvmf_init_br2" 00:16:48.232 09:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # true 00:16:48.232 09:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:48.232 Cannot find device "nvmf_tgt_br" 00:16:48.232 09:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # true 00:16:48.232 09:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:48.232 Cannot find device "nvmf_tgt_br2" 00:16:48.232 09:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # true 00:16:48.232 09:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:48.232 Cannot find device "nvmf_br" 00:16:48.232 09:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # true 00:16:48.232 09:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:16:48.232 Cannot find device "nvmf_init_if" 00:16:48.232 09:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # true 00:16:48.232 09:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:48.232 Cannot find device "nvmf_init_if2" 00:16:48.232 09:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # true 00:16:48.232 09:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:48.232 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:48.232 09:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # true 00:16:48.232 09:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:48.232 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:48.232 09:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # true 00:16:48.232 09:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:48.232 09:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:48.232 09:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:48.232 09:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:48.491 09:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:48.491 09:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:48.491 09:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:48.491 09:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:48.491 09:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:48.491 09:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 
10.0.0.3/24 dev nvmf_tgt_if 00:16:48.491 09:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:48.491 09:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:48.491 09:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:48.491 09:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:16:48.491 09:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:48.491 09:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:48.491 09:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:48.491 09:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:48.491 09:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:48.491 09:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:48.491 09:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:48.491 09:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:48.491 09:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:48.491 09:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:48.491 09:44:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:48.491 09:44:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:48.491 09:44:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:48.491 09:44:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:48.491 09:44:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:48.491 09:44:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:48.491 09:44:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:48.491 09:44:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:48.491 09:44:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:48.491 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:16:48.491 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.092 ms 00:16:48.491 00:16:48.491 --- 10.0.0.3 ping statistics --- 00:16:48.491 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:48.491 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:16:48.491 09:44:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:48.491 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:16:48.491 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.079 ms 00:16:48.491 00:16:48.491 --- 10.0.0.4 ping statistics --- 00:16:48.491 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:48.491 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:16:48.491 09:44:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:48.491 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:48.491 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:16:48.491 00:16:48.491 --- 10.0.0.1 ping statistics --- 00:16:48.491 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:48.491 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:16:48.491 09:44:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:48.491 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:48.491 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.052 ms 00:16:48.491 00:16:48.491 --- 10.0.0.2 ping statistics --- 00:16:48.491 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:48.491 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:16:48.491 09:44:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:48.491 09:44:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@461 -- # return 0 00:16:48.491 09:44:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:48.491 09:44:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:48.491 09:44:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:48.491 09:44:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:48.491 09:44:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:48.491 09:44:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:48.491 09:44:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:48.491 09:44:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:16:48.491 09:44:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:48.491 09:44:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:48.491 09:44:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:16:48.491 09:44:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=74509 00:16:48.491 09:44:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:48.491 09:44:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 74509 00:16:48.491 09:44:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 74509 ']' 00:16:48.491 09:44:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:48.491 09:44:36 
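By this point the trace has built the veth/bridge topology, verified it with the pings above, loaded the nvme-tcp module, and launched nvmf_tgt inside the nvmf_tgt_ns_spdk namespace. A minimal by-hand sketch of that launch step, using only the paths, flags and socket location visible in this log (the socket poll below is a crude stand-in for the harness's waitforlisten helper, not its real implementation; it assumes a root shell, as in the CI environment):

# run the SPDK NVMe-oF target inside the test namespace; flags mirror the traced command
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
# wait for the default RPC socket to appear before issuing any rpc.py calls
until [ -S /var/tmp/spdk.sock ]; do sleep 0.5; done
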
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:48.491 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:48.491 09:44:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:48.491 09:44:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:48.491 09:44:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:16:48.749 [2024-11-19 09:44:36.158397] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:16:48.749 [2024-11-19 09:44:36.158502] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:48.749 [2024-11-19 09:44:36.313405] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:49.007 [2024-11-19 09:44:36.384459] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:49.007 [2024-11-19 09:44:36.384524] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:49.007 [2024-11-19 09:44:36.384538] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:49.007 [2024-11-19 09:44:36.384548] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:49.007 [2024-11-19 09:44:36.384557] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:49.007 [2024-11-19 09:44:36.385831] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:49.007 [2024-11-19 09:44:36.385973] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:49.007 [2024-11-19 09:44:36.386055] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:49.007 [2024-11-19 09:44:36.386056] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:49.007 [2024-11-19 09:44:36.443785] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:49.007 09:44:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:49.007 09:44:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:16:49.007 09:44:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:49.007 09:44:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:49.007 09:44:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:16:49.007 09:44:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:49.007 09:44:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:16:49.007 09:44:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:16:49.573 09:44:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:16:49.573 09:44:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:16:49.932 09:44:37 
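The next stretch of the trace wires the freshly started target up over RPC: it creates a Malloc bdev, a TCP transport, and a subsystem, adds both bdevs as namespaces, and exposes listeners on 10.0.0.3:4420. Condensed into a by-hand sketch (same rpc.py script, and the exact subsystem name, serial, and address that perf.sh uses below; Malloc0 and Nvme0n1 are simply the bdev names this particular run produces):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc bdev_malloc_create 64 512                      # 64 MB malloc bdev, 512-byte blocks -> "Malloc0"
$rpc nvmf_create_transport -t tcp -o
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
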
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:16:49.932 09:44:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:50.190 09:44:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:16:50.190 09:44:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:00:10.0 ']' 00:16:50.190 09:44:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:16:50.191 09:44:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:16:50.191 09:44:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:50.191 [2024-11-19 09:44:37.805877] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:50.449 09:44:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:50.708 09:44:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:16:50.708 09:44:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:50.966 09:44:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:16:50.966 09:44:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:16:51.224 09:44:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:16:51.487 [2024-11-19 09:44:38.883158] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:51.487 09:44:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:16:51.752 09:44:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:16:51.752 09:44:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:16:51.752 09:44:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:16:51.752 09:44:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:16:52.687 Initializing NVMe Controllers 00:16:52.687 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:16:52.687 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:16:52.687 Initialization complete. Launching workers. 
00:16:52.687 ======================================================== 00:16:52.687 Latency(us) 00:16:52.687 Device Information : IOPS MiB/s Average min max 00:16:52.687 PCIE (0000:00:10.0) NSID 1 from core 0: 25699.00 100.39 1244.36 339.42 5433.34 00:16:52.687 ======================================================== 00:16:52.687 Total : 25699.00 100.39 1244.36 339.42 5433.34 00:16:52.687 00:16:52.687 09:44:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:16:54.063 Initializing NVMe Controllers 00:16:54.063 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:16:54.063 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:54.063 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:16:54.063 Initialization complete. Launching workers. 00:16:54.063 ======================================================== 00:16:54.063 Latency(us) 00:16:54.063 Device Information : IOPS MiB/s Average min max 00:16:54.063 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3677.00 14.36 271.62 103.62 6107.57 00:16:54.063 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 124.00 0.48 8112.21 6026.14 12032.41 00:16:54.063 ======================================================== 00:16:54.063 Total : 3801.00 14.85 527.40 103.62 12032.41 00:16:54.063 00:16:54.063 09:44:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:16:55.436 Initializing NVMe Controllers 00:16:55.436 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:16:55.436 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:55.436 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:16:55.436 Initialization complete. Launching workers. 00:16:55.436 ======================================================== 00:16:55.436 Latency(us) 00:16:55.436 Device Information : IOPS MiB/s Average min max 00:16:55.436 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8709.67 34.02 3686.16 592.02 8547.48 00:16:55.436 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3923.95 15.33 8192.46 5901.17 17271.02 00:16:55.436 ======================================================== 00:16:55.436 Total : 12633.61 49.35 5085.79 592.02 17271.02 00:16:55.436 00:16:55.694 09:44:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:16:55.694 09:44:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:16:58.224 Initializing NVMe Controllers 00:16:58.224 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:16:58.224 Controller IO queue size 128, less than required. 00:16:58.224 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:58.224 Controller IO queue size 128, less than required. 
00:16:58.224 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:58.224 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:58.224 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:16:58.224 Initialization complete. Launching workers. 00:16:58.224 ======================================================== 00:16:58.224 Latency(us) 00:16:58.224 Device Information : IOPS MiB/s Average min max 00:16:58.224 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1715.09 428.77 76492.75 35053.19 155702.48 00:16:58.224 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 615.35 153.84 215306.37 69551.58 365417.78 00:16:58.224 ======================================================== 00:16:58.224 Total : 2330.45 582.61 113146.45 35053.19 365417.78 00:16:58.224 00:16:58.224 09:44:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -c 0xf -P 4 00:16:58.483 Initializing NVMe Controllers 00:16:58.483 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:16:58.483 Controller IO queue size 128, less than required. 00:16:58.483 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:58.483 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:16:58.483 Controller IO queue size 128, less than required. 00:16:58.483 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:58.483 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:16:58.483 WARNING: Some requested NVMe devices were skipped 00:16:58.483 No valid NVMe controllers or AIO or URING devices found 00:16:58.483 09:44:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' --transport-stat 00:17:01.090 Initializing NVMe Controllers 00:17:01.090 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:17:01.090 Controller IO queue size 128, less than required. 00:17:01.090 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:01.090 Controller IO queue size 128, less than required. 00:17:01.090 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:01.090 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:01.090 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:17:01.090 Initialization complete. Launching workers. 
00:17:01.090 00:17:01.090 ==================== 00:17:01.090 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:17:01.090 TCP transport: 00:17:01.090 polls: 9016 00:17:01.090 idle_polls: 5557 00:17:01.090 sock_completions: 3459 00:17:01.090 nvme_completions: 6215 00:17:01.090 submitted_requests: 9344 00:17:01.090 queued_requests: 1 00:17:01.090 00:17:01.090 ==================== 00:17:01.090 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:17:01.090 TCP transport: 00:17:01.090 polls: 11524 00:17:01.090 idle_polls: 7742 00:17:01.090 sock_completions: 3782 00:17:01.090 nvme_completions: 6435 00:17:01.090 submitted_requests: 9662 00:17:01.090 queued_requests: 1 00:17:01.090 ======================================================== 00:17:01.090 Latency(us) 00:17:01.090 Device Information : IOPS MiB/s Average min max 00:17:01.090 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1550.43 387.61 84776.83 36776.06 143212.30 00:17:01.090 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1605.32 401.33 80492.60 37889.28 130958.03 00:17:01.090 ======================================================== 00:17:01.090 Total : 3155.74 788.94 82597.45 36776.06 143212.30 00:17:01.090 00:17:01.090 09:44:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:17:01.090 09:44:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:01.350 09:44:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:17:01.350 09:44:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:17:01.350 09:44:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:17:01.350 09:44:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:01.350 09:44:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:17:01.350 09:44:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:01.350 09:44:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:17:01.350 09:44:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:01.350 09:44:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:01.350 rmmod nvme_tcp 00:17:01.350 rmmod nvme_fabrics 00:17:01.350 rmmod nvme_keyring 00:17:01.350 09:44:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:01.350 09:44:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:17:01.350 09:44:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:17:01.350 09:44:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 74509 ']' 00:17:01.350 09:44:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 74509 00:17:01.350 09:44:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 74509 ']' 00:17:01.350 09:44:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 74509 00:17:01.350 09:44:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:17:01.350 09:44:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:01.350 09:44:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74509 00:17:01.350 killing process with pid 74509 00:17:01.350 09:44:48 
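The --transport-stat counters above give a rough sense of how busy the host-side TCP poll group was during the run: polls minus idle_polls is the number of poll iterations that actually found socket work (it matches sock_completions for both namespaces here), and nvme_completions divided by that figure approximates how many NVMe completions each productive poll reaped. A back-of-the-envelope reading of the numbers printed above, not an official SPDK metric:

# NSID 1: 9016 - 5557 = 3459 productive polls; 6215 completions / 3459 ≈ 1.8 per poll
# NSID 2: 11524 - 7742 = 3782 productive polls; 6435 completions / 3782 ≈ 1.7 per poll
awk 'BEGIN { printf "%.2f %.2f\n", 6215/(9016-5557), 6435/(11524-7742) }'
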
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:01.350 09:44:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:01.350 09:44:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74509' 00:17:01.350 09:44:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@973 -- # kill 74509 00:17:01.350 09:44:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 74509 00:17:02.287 09:44:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:02.287 09:44:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:02.287 09:44:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:02.287 09:44:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:17:02.287 09:44:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:02.287 09:44:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:17:02.287 09:44:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:17:02.287 09:44:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:02.287 09:44:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:02.287 09:44:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:02.287 09:44:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:02.287 09:44:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:02.287 09:44:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:02.287 09:44:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:02.287 09:44:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:02.287 09:44:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:02.287 09:44:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:02.287 09:44:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:02.287 09:44:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:02.287 09:44:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:02.287 09:44:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:02.287 09:44:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:02.287 09:44:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:02.287 09:44:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:02.287 09:44:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:02.287 09:44:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:02.546 09:44:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@300 -- # return 0 00:17:02.546 00:17:02.546 real 0m14.472s 00:17:02.546 user 0m52.013s 00:17:02.546 sys 0m4.095s 00:17:02.546 09:44:49 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:02.546 ************************************ 00:17:02.546 END TEST nvmf_perf 00:17:02.546 ************************************ 00:17:02.546 09:44:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:17:02.547 09:44:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:17:02.547 09:44:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:02.547 09:44:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:02.547 09:44:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:02.547 ************************************ 00:17:02.547 START TEST nvmf_fio_host 00:17:02.547 ************************************ 00:17:02.547 09:44:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:17:02.547 * Looking for test storage... 00:17:02.547 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:02.547 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:02.547 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lcov --version 00:17:02.547 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:02.547 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:02.547 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:02.547 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:02.547 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:02.547 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:17:02.547 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:17:02.547 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:17:02.547 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:17:02.547 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:17:02.547 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:17:02.547 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:17:02.547 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:02.547 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:17:02.547 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:17:02.547 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:02.547 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:02.547 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:17:02.547 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:17:02.547 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:02.547 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:17:02.547 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:17:02.547 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:17:02.547 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:17:02.547 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:02.547 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:17:02.547 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:17:02.547 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:02.547 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:02.547 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:17:02.547 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:02.547 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:02.547 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:02.547 --rc genhtml_branch_coverage=1 00:17:02.547 --rc genhtml_function_coverage=1 00:17:02.547 --rc genhtml_legend=1 00:17:02.547 --rc geninfo_all_blocks=1 00:17:02.547 --rc geninfo_unexecuted_blocks=1 00:17:02.547 00:17:02.547 ' 00:17:02.547 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:02.547 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:02.547 --rc genhtml_branch_coverage=1 00:17:02.547 --rc genhtml_function_coverage=1 00:17:02.547 --rc genhtml_legend=1 00:17:02.547 --rc geninfo_all_blocks=1 00:17:02.547 --rc geninfo_unexecuted_blocks=1 00:17:02.547 00:17:02.547 ' 00:17:02.547 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:02.547 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:02.547 --rc genhtml_branch_coverage=1 00:17:02.547 --rc genhtml_function_coverage=1 00:17:02.547 --rc genhtml_legend=1 00:17:02.547 --rc geninfo_all_blocks=1 00:17:02.547 --rc geninfo_unexecuted_blocks=1 00:17:02.547 00:17:02.547 ' 00:17:02.547 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:02.547 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:02.547 --rc genhtml_branch_coverage=1 00:17:02.547 --rc genhtml_function_coverage=1 00:17:02.547 --rc genhtml_legend=1 00:17:02.547 --rc geninfo_all_blocks=1 00:17:02.547 --rc geninfo_unexecuted_blocks=1 00:17:02.547 00:17:02.547 ' 00:17:02.547 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:02.547 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:17:02.547 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:02.547 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:02.547 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:02.547 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:02.547 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:02.547 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:02.547 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:17:02.547 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:02.547 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:02.547 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:17:02.547 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:02.547 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:02.547 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:02.547 09:44:50 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:02.547 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:02.548 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:02.548 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:02.548 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:02.548 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:02.809 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:02.809 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c 00:17:02.809 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=9203ba0c-8506-4f0b-a886-a7f874c4694c 00:17:02.809 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:02.809 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:02.809 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:02.809 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:02.809 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:02.809 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:17:02.809 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:02.809 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:02.809 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:02.809 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:02.809 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:02.809 09:44:50 
nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:02.809 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:17:02.809 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:02.809 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:17:02.809 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:02.809 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:02.809 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:02.809 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:02.809 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:02.809 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:02.809 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:02.809 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:02.809 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:02.809 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:02.809 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:02.809 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:17:02.809 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:02.809 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:02.809 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:02.809 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:02.809 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:02.809 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 
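Note: the "[: : integer expression expected" message just above appears harmless in this run. test's -eq operator needs integers on both sides, and the traced command '[' '' -eq 1 ']' at common.sh line 33 feeds it an empty string, so the branch simply evaluates false and the script continues. A minimal sketch of the pitfall and one way to guard it; SOME_FLAG is a hypothetical stand-in, since the real variable name is not visible in this excerpt:

# Sketch of the bash pitfall behind the "integer expression expected" message.
SOME_FLAG=""                          # empty/unset in this CI environment
if [ "$SOME_FLAG" -eq 1 ]; then       # expands to '[ "" -eq 1 ]' -> error on stderr, branch is false
    echo "flag enabled"
fi
if [ "${SOME_FLAG:-0}" -eq 1 ]; then  # defaulting to 0 keeps the operand numeric and silences the error
    echo "flag enabled"
fi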
00:17:02.809 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:02.809 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:02.809 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:17:02.809 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:17:02.809 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:17:02.809 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:17:02.809 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:17:02.809 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@460 -- # nvmf_veth_init 00:17:02.809 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:02.809 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:02.809 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:02.809 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:02.809 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:02.809 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:02.809 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:02.809 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:02.809 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:02.809 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:02.809 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:02.809 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:02.809 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:02.809 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:02.809 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:02.809 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:02.809 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:02.809 Cannot find device "nvmf_init_br" 00:17:02.809 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # true 00:17:02.809 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:02.809 Cannot find device "nvmf_init_br2" 00:17:02.809 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # true 00:17:02.810 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:02.810 Cannot find device "nvmf_tgt_br" 00:17:02.810 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # true 00:17:02.810 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # ip link set 
nvmf_tgt_br2 nomaster 00:17:02.810 Cannot find device "nvmf_tgt_br2" 00:17:02.810 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # true 00:17:02.810 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:02.810 Cannot find device "nvmf_init_br" 00:17:02.810 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # true 00:17:02.810 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:02.810 Cannot find device "nvmf_init_br2" 00:17:02.810 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # true 00:17:02.810 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:02.810 Cannot find device "nvmf_tgt_br" 00:17:02.810 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # true 00:17:02.810 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:02.810 Cannot find device "nvmf_tgt_br2" 00:17:02.810 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # true 00:17:02.810 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:02.810 Cannot find device "nvmf_br" 00:17:02.810 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # true 00:17:02.810 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:02.810 Cannot find device "nvmf_init_if" 00:17:02.810 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # true 00:17:02.810 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:02.810 Cannot find device "nvmf_init_if2" 00:17:02.810 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # true 00:17:02.810 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:02.810 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:02.810 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # true 00:17:02.810 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:02.810 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:02.810 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # true 00:17:02.810 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:02.810 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:02.810 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:02.810 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:02.810 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:02.810 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:02.810 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:02.810 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev 
nvmf_init_if 00:17:02.810 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:02.810 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:02.810 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:02.810 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:02.810 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:02.810 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:03.069 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:03.069 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:03.069 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:03.069 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:03.069 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:03.069 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:03.069 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:03.069 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:03.069 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:03.069 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:03.069 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:03.069 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:03.069 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:03.069 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:03.069 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:03.069 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:03.069 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:03.069 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:03.069 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:03.069 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:17:03.069 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.087 ms 00:17:03.069 00:17:03.069 --- 10.0.0.3 ping statistics --- 00:17:03.069 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:03.069 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:17:03.069 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:03.069 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:03.069 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.055 ms 00:17:03.069 00:17:03.069 --- 10.0.0.4 ping statistics --- 00:17:03.069 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:03.069 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:17:03.069 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:03.069 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:03.069 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:17:03.069 00:17:03.069 --- 10.0.0.1 ping statistics --- 00:17:03.069 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:03.069 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:17:03.069 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:03.069 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:03.069 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:17:03.069 00:17:03.069 --- 10.0.0.2 ping statistics --- 00:17:03.069 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:03.069 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:17:03.069 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:03.069 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@461 -- # return 0 00:17:03.069 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:03.069 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:03.069 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:03.069 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:03.069 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:03.069 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:03.069 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:03.069 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:17:03.069 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:17:03.069 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:03.069 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:17:03.069 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=74962 00:17:03.069 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:03.069 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:03.069 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 74962 00:17:03.069 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@835 -- # '[' -z 74962 ']' 00:17:03.069 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:03.069 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:03.069 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:03.069 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:03.069 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:03.069 09:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:17:03.069 [2024-11-19 09:44:50.641491] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:17:03.069 [2024-11-19 09:44:50.641599] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:03.328 [2024-11-19 09:44:50.793894] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:03.328 [2024-11-19 09:44:50.857469] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:03.328 [2024-11-19 09:44:50.857514] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:03.328 [2024-11-19 09:44:50.857526] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:03.328 [2024-11-19 09:44:50.857534] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:03.328 [2024-11-19 09:44:50.857542] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
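Before the target above was launched, nvmf_veth_init wired the initiator and the network namespace hosting the target into a single bridge. Stripped of the second interface pair (the *_if2/*_br2 devices) and of all error handling, the topology traced above reduces to roughly this sketch:

# Condensed sketch of the veth/bridge topology nvmf_veth_init builds (commands taken from the trace).
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br            # initiator-side pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br              # target-side pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                       # move target end into the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if                             # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if   # target address
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge                                      # bridge joining the two host-side ends
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT    # allow NVMe/TCP traffic in
ping -c 1 10.0.0.3                                                   # initiator -> target sanity check

With that in place, the nvmf_tgt process started above listens on 10.0.0.3:4420 inside nvmf_tgt_ns_spdk while fio connects from the host side across the bridge.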
00:17:03.328 [2024-11-19 09:44:50.858746] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:03.328 [2024-11-19 09:44:50.858814] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:03.328 [2024-11-19 09:44:50.858908] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:03.328 [2024-11-19 09:44:50.858918] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:03.328 [2024-11-19 09:44:50.932303] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:04.261 09:44:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:04.261 09:44:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:17:04.261 09:44:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:04.519 [2024-11-19 09:44:51.925174] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:04.519 09:44:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:17:04.519 09:44:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:04.519 09:44:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:17:04.519 09:44:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:17:04.777 Malloc1 00:17:04.777 09:44:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:05.035 09:44:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:05.293 09:44:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:17:05.552 [2024-11-19 09:44:53.019819] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:05.552 09:44:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:17:05.811 09:44:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:17:05.811 09:44:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:17:05.811 09:44:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:17:05.811 09:44:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:17:05.811 09:44:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:05.811 09:44:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:17:05.811 09:44:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local 
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:17:05.811 09:44:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:17:05.811 09:44:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:17:05.811 09:44:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:17:05.811 09:44:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:17:05.811 09:44:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:17:05.811 09:44:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:17:05.811 09:44:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:17:05.811 09:44:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:17:05.811 09:44:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:17:05.811 09:44:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:17:05.811 09:44:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:17:05.811 09:44:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:17:05.811 09:44:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:17:05.811 09:44:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:17:05.811 09:44:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:17:05.811 09:44:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:17:06.069 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:17:06.069 fio-3.35 00:17:06.069 Starting 1 thread 00:17:08.599 00:17:08.599 test: (groupid=0, jobs=1): err= 0: pid=75045: Tue Nov 19 09:44:55 2024 00:17:08.599 read: IOPS=8371, BW=32.7MiB/s (34.3MB/s)(65.6MiB/2006msec) 00:17:08.599 slat (nsec): min=1972, max=348614, avg=2581.02, stdev=3508.84 00:17:08.599 clat (usec): min=2564, max=17698, avg=7972.33, stdev=1205.93 00:17:08.599 lat (usec): min=2615, max=17700, avg=7974.92, stdev=1205.77 00:17:08.599 clat percentiles (usec): 00:17:08.599 | 1.00th=[ 6521], 5.00th=[ 6915], 10.00th=[ 7046], 20.00th=[ 7308], 00:17:08.599 | 30.00th=[ 7439], 40.00th=[ 7570], 50.00th=[ 7701], 60.00th=[ 7832], 00:17:08.599 | 70.00th=[ 8029], 80.00th=[ 8225], 90.00th=[ 9372], 95.00th=[10421], 00:17:08.599 | 99.00th=[12649], 99.50th=[13960], 99.90th=[16450], 99.95th=[17171], 00:17:08.599 | 99.99th=[17695] 00:17:08.599 bw ( KiB/s): min=29536, max=35920, per=99.86%, avg=33440.00, stdev=2752.30, samples=4 00:17:08.599 iops : min= 7384, max= 8980, avg=8360.00, stdev=688.08, samples=4 00:17:08.599 write: IOPS=8366, BW=32.7MiB/s (34.3MB/s)(65.6MiB/2006msec); 0 zone resets 00:17:08.599 slat (usec): min=2, max=237, avg= 2.66, stdev= 2.33 00:17:08.599 clat (usec): min=2424, max=16337, avg=7257.68, stdev=1066.32 00:17:08.599 lat (usec): min=2439, max=16340, avg=7260.35, stdev=1066.23 00:17:08.599 clat 
percentiles (usec): 00:17:08.599 | 1.00th=[ 5997], 5.00th=[ 6325], 10.00th=[ 6456], 20.00th=[ 6652], 00:17:08.599 | 30.00th=[ 6783], 40.00th=[ 6915], 50.00th=[ 7046], 60.00th=[ 7111], 00:17:08.599 | 70.00th=[ 7308], 80.00th=[ 7504], 90.00th=[ 8455], 95.00th=[ 9503], 00:17:08.599 | 99.00th=[11207], 99.50th=[12649], 99.90th=[15795], 99.95th=[16057], 00:17:08.599 | 99.99th=[16319] 00:17:08.599 bw ( KiB/s): min=29312, max=35008, per=99.98%, avg=33458.00, stdev=2766.99, samples=4 00:17:08.599 iops : min= 7328, max= 8752, avg=8364.50, stdev=691.75, samples=4 00:17:08.599 lat (msec) : 4=0.09%, 10=94.77%, 20=5.15% 00:17:08.599 cpu : usr=71.82%, sys=21.50%, ctx=22, majf=0, minf=7 00:17:08.600 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:17:08.600 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:08.600 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:08.600 issued rwts: total=16793,16783,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:08.600 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:08.600 00:17:08.600 Run status group 0 (all jobs): 00:17:08.600 READ: bw=32.7MiB/s (34.3MB/s), 32.7MiB/s-32.7MiB/s (34.3MB/s-34.3MB/s), io=65.6MiB (68.8MB), run=2006-2006msec 00:17:08.600 WRITE: bw=32.7MiB/s (34.3MB/s), 32.7MiB/s-32.7MiB/s (34.3MB/s-34.3MB/s), io=65.6MiB (68.7MB), run=2006-2006msec 00:17:08.600 09:44:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:17:08.600 09:44:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:17:08.600 09:44:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:17:08.600 09:44:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:08.600 09:44:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:17:08.600 09:44:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:17:08.600 09:44:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:17:08.600 09:44:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:17:08.600 09:44:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:17:08.600 09:44:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:17:08.600 09:44:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:17:08.600 09:44:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:17:08.600 09:44:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:17:08.600 09:44:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:17:08.600 09:44:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:17:08.600 09:44:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:17:08.600 09:44:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:17:08.600 09:44:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:17:08.600 09:44:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:17:08.600 09:44:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:17:08.600 09:44:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:17:08.600 09:44:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:17:08.600 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:17:08.600 fio-3.35 00:17:08.600 Starting 1 thread 00:17:11.132 00:17:11.132 test: (groupid=0, jobs=1): err= 0: pid=75088: Tue Nov 19 09:44:58 2024 00:17:11.132 read: IOPS=8297, BW=130MiB/s (136MB/s)(260MiB/2005msec) 00:17:11.132 slat (usec): min=3, max=117, avg= 3.75, stdev= 1.89 00:17:11.132 clat (usec): min=1679, max=17876, avg=8625.66, stdev=2598.05 00:17:11.132 lat (usec): min=1682, max=17879, avg=8629.41, stdev=2598.08 00:17:11.132 clat percentiles (usec): 00:17:11.132 | 1.00th=[ 4015], 5.00th=[ 4883], 10.00th=[ 5538], 20.00th=[ 6390], 00:17:11.132 | 30.00th=[ 7046], 40.00th=[ 7635], 50.00th=[ 8291], 60.00th=[ 8979], 00:17:11.132 | 70.00th=[ 9896], 80.00th=[10814], 90.00th=[11994], 95.00th=[13173], 00:17:11.132 | 99.00th=[16319], 99.50th=[16712], 99.90th=[17433], 99.95th=[17695], 00:17:11.132 | 99.99th=[17957] 00:17:11.132 bw ( KiB/s): min=59136, max=73312, per=50.23%, avg=66680.00, stdev=6615.54, samples=4 00:17:11.132 iops : min= 3696, max= 4582, avg=4167.50, stdev=413.47, samples=4 00:17:11.132 write: IOPS=4743, BW=74.1MiB/s (77.7MB/s)(137MiB/1843msec); 0 zone resets 00:17:11.132 slat (usec): min=33, max=364, avg=38.75, stdev= 8.01 00:17:11.132 clat (usec): min=5531, max=20409, avg=12071.90, stdev=2146.02 00:17:11.132 lat (usec): min=5576, max=20446, avg=12110.65, stdev=2146.53 00:17:11.132 clat percentiles (usec): 00:17:11.132 | 1.00th=[ 8029], 5.00th=[ 8848], 10.00th=[ 9503], 20.00th=[10290], 00:17:11.132 | 30.00th=[10814], 40.00th=[11207], 50.00th=[11731], 60.00th=[12387], 00:17:11.132 | 70.00th=[13042], 80.00th=[13960], 90.00th=[15139], 95.00th=[15926], 00:17:11.132 | 99.00th=[17171], 99.50th=[17695], 99.90th=[18744], 99.95th=[19006], 00:17:11.132 | 99.99th=[20317] 00:17:11.132 bw ( KiB/s): min=61216, max=76096, per=91.51%, avg=69456.00, stdev=7065.58, samples=4 00:17:11.132 iops : min= 3826, max= 4756, avg=4341.00, stdev=441.60, samples=4 00:17:11.132 lat (msec) : 2=0.04%, 4=0.61%, 10=51.52%, 20=47.81%, 50=0.01% 00:17:11.132 cpu : usr=84.38%, sys=11.73%, ctx=7, majf=0, minf=12 00:17:11.132 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:17:11.132 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:11.132 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:11.132 issued rwts: total=16636,8743,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:11.132 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:11.132 00:17:11.132 Run status group 0 (all jobs): 
00:17:11.132 READ: bw=130MiB/s (136MB/s), 130MiB/s-130MiB/s (136MB/s-136MB/s), io=260MiB (273MB), run=2005-2005msec 00:17:11.132 WRITE: bw=74.1MiB/s (77.7MB/s), 74.1MiB/s-74.1MiB/s (77.7MB/s-77.7MB/s), io=137MiB (143MB), run=1843-1843msec 00:17:11.132 09:44:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:11.132 09:44:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:17:11.132 09:44:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:17:11.132 09:44:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:17:11.133 09:44:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:17:11.133 09:44:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:11.133 09:44:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:17:11.133 09:44:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:11.133 09:44:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:17:11.133 09:44:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:11.133 09:44:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:11.133 rmmod nvme_tcp 00:17:11.133 rmmod nvme_fabrics 00:17:11.133 rmmod nvme_keyring 00:17:11.133 09:44:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:11.133 09:44:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:17:11.133 09:44:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:17:11.133 09:44:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 74962 ']' 00:17:11.133 09:44:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 74962 00:17:11.133 09:44:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 74962 ']' 00:17:11.133 09:44:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 74962 00:17:11.133 09:44:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:17:11.133 09:44:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:11.133 09:44:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74962 00:17:11.391 killing process with pid 74962 00:17:11.391 09:44:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:11.391 09:44:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:11.391 09:44:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74962' 00:17:11.391 09:44:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 74962 00:17:11.391 09:44:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 74962 00:17:11.391 09:44:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:11.391 09:44:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:11.391 09:44:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:11.391 09:44:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:17:11.391 09:44:58 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:11.391 09:44:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:17:11.391 09:44:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:17:11.391 09:44:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:11.391 09:44:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:11.391 09:44:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:11.650 09:44:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:11.650 09:44:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:11.650 09:44:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:11.650 09:44:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:11.650 09:44:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:11.650 09:44:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:11.650 09:44:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:11.650 09:44:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:11.650 09:44:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:11.650 09:44:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:11.650 09:44:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:11.650 09:44:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:11.650 09:44:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:11.650 09:44:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:11.650 09:44:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:11.650 09:44:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:11.650 09:44:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@300 -- # return 0 00:17:11.650 ************************************ 00:17:11.650 END TEST nvmf_fio_host 00:17:11.650 ************************************ 00:17:11.650 00:17:11.650 real 0m9.238s 00:17:11.650 user 0m37.008s 00:17:11.650 sys 0m2.428s 00:17:11.650 09:44:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:11.650 09:44:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.650 09:44:59 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:17:11.650 09:44:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:11.650 09:44:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:11.650 09:44:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.909 ************************************ 00:17:11.909 START TEST nvmf_failover 
00:17:11.909 ************************************ 00:17:11.909 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:17:11.909 * Looking for test storage... 00:17:11.909 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:11.909 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:11.909 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lcov --version 00:17:11.909 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:11.909 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:11.909 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:11.909 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:11.909 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:11.909 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:17:11.909 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:17:11.909 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:17:11.909 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:17:11.909 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:17:11.909 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:17:11.909 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:17:11.909 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:11.909 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:17:11.909 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:17:11.909 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:11.909 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:11.910 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:17:11.910 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:17:11.910 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:11.910 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:17:11.910 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:17:11.910 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:17:11.910 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:17:11.910 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:11.910 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:17:11.910 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:17:11.910 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:11.910 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:11.910 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:17:11.910 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:11.910 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:11.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:11.910 --rc genhtml_branch_coverage=1 00:17:11.910 --rc genhtml_function_coverage=1 00:17:11.910 --rc genhtml_legend=1 00:17:11.910 --rc geninfo_all_blocks=1 00:17:11.910 --rc geninfo_unexecuted_blocks=1 00:17:11.910 00:17:11.910 ' 00:17:11.910 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:11.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:11.910 --rc genhtml_branch_coverage=1 00:17:11.910 --rc genhtml_function_coverage=1 00:17:11.910 --rc genhtml_legend=1 00:17:11.910 --rc geninfo_all_blocks=1 00:17:11.910 --rc geninfo_unexecuted_blocks=1 00:17:11.910 00:17:11.910 ' 00:17:11.910 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:11.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:11.910 --rc genhtml_branch_coverage=1 00:17:11.910 --rc genhtml_function_coverage=1 00:17:11.910 --rc genhtml_legend=1 00:17:11.910 --rc geninfo_all_blocks=1 00:17:11.910 --rc geninfo_unexecuted_blocks=1 00:17:11.910 00:17:11.910 ' 00:17:11.910 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:11.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:11.910 --rc genhtml_branch_coverage=1 00:17:11.910 --rc genhtml_function_coverage=1 00:17:11.910 --rc genhtml_legend=1 00:17:11.910 --rc geninfo_all_blocks=1 00:17:11.910 --rc geninfo_unexecuted_blocks=1 00:17:11.910 00:17:11.910 ' 00:17:11.910 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:11.910 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:17:11.910 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:11.910 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:17:11.910 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:11.910 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:11.910 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:11.910 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:11.910 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:11.910 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:11.910 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:11.910 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:11.910 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c 00:17:11.910 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=9203ba0c-8506-4f0b-a886-a7f874c4694c 00:17:11.910 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:11.910 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:11.910 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:11.910 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:11.910 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:11.910 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:17:11.910 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:11.910 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:11.910 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:11.910 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:11.910 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:11.910 
09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:11.910 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:17:11.910 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:11.910 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:17:11.910 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:11.910 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:11.910 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:11.910 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:11.910 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:11.910 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:11.910 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:11.910 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:11.910 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:11.910 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:11.910 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:11.910 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:11.910 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:11.910 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:11.910 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:17:11.910 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:11.910 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:11.910 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:11.910 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 
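The "Cannot find device ..." lines that follow come from nvmftestinit's teardown pass for the failover test: before rebuilding the topology it first tries to dismantle anything left over from the previous test, and the paired "-- # true" trace entries suggest each command is wrapped so a missing interface is tolerated. A hedged sketch of that idempotent cleanup pattern, under the assumption of an "|| true"-style guard:

# Tolerant teardown implied by the trace: every command may fail when the device is already gone.
for br_end in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$br_end" nomaster || true    # detach from the bridge if it still exists
    ip link set "$br_end" down     || true
done
ip link delete nvmf_br type bridge || true    # remove the bridge itself
ip link delete nvmf_init_if  || true          # and the host-side veth ends
ip link delete nvmf_init_if2 || true
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if  || true
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 || true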
00:17:11.910 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:11.910 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:11.910 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:11.910 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:11.910 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:17:11.910 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:17:11.910 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:17:11.910 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:17:11.910 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:17:11.910 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@460 -- # nvmf_veth_init 00:17:11.910 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:11.910 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:11.910 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:11.910 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:11.910 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:11.910 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:11.910 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:11.910 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:11.911 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:11.911 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:11.911 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:11.911 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:11.911 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:11.911 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:11.911 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:11.911 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:11.911 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:11.911 Cannot find device "nvmf_init_br" 00:17:11.911 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # true 00:17:11.911 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:11.911 Cannot find device "nvmf_init_br2" 00:17:11.911 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # true 00:17:11.911 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 
00:17:11.911 Cannot find device "nvmf_tgt_br" 00:17:11.911 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # true 00:17:11.911 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:17:12.169 Cannot find device "nvmf_tgt_br2" 00:17:12.169 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # true 00:17:12.169 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:12.169 Cannot find device "nvmf_init_br" 00:17:12.169 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # true 00:17:12.169 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:12.169 Cannot find device "nvmf_init_br2" 00:17:12.169 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # true 00:17:12.169 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:12.169 Cannot find device "nvmf_tgt_br" 00:17:12.169 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # true 00:17:12.169 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:12.169 Cannot find device "nvmf_tgt_br2" 00:17:12.169 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # true 00:17:12.169 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:12.169 Cannot find device "nvmf_br" 00:17:12.169 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # true 00:17:12.169 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:12.169 Cannot find device "nvmf_init_if" 00:17:12.169 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # true 00:17:12.169 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:12.169 Cannot find device "nvmf_init_if2" 00:17:12.169 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # true 00:17:12.169 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:12.169 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:12.169 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # true 00:17:12.169 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:12.169 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:12.169 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # true 00:17:12.169 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:12.169 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:12.169 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:12.169 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:12.169 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:12.169 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:12.169 
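The "Cannot find device" lines above are the expected output of the teardown half of nvmf_veth_init: stale interfaces from a previous run are removed before the topology is rebuilt, and the trace shows each failed command followed by true, so a missing device does not abort the script. A sketch of that tolerant-cleanup pattern, assuming the usual "|| true" form (the exact redirections in common.sh are not visible in this trace):

# remove leftovers from a previous run; ignore "Cannot find device" failures
ip link set nvmf_init_br nomaster || true
ip link delete nvmf_br type bridge || true
ip link delete nvmf_init_if || true
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if || true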
09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:12.169 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:12.169 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:12.169 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:12.169 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:12.169 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:12.169 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:12.169 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:12.169 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:12.169 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:12.169 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:12.169 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:12.169 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:12.169 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:12.169 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:12.169 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:12.169 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:12.169 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:12.428 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:12.428 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:12.428 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:12.428 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:12.428 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:12.428 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:12.428 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:12.428 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j 
ACCEPT' 00:17:12.428 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:12.428 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:12.428 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.084 ms 00:17:12.428 00:17:12.428 --- 10.0.0.3 ping statistics --- 00:17:12.428 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:12.428 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:17:12.428 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:12.428 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:12.428 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.113 ms 00:17:12.428 00:17:12.428 --- 10.0.0.4 ping statistics --- 00:17:12.428 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:12.428 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:17:12.429 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:12.429 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:12.429 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.041 ms 00:17:12.429 00:17:12.429 --- 10.0.0.1 ping statistics --- 00:17:12.429 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:12.429 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:17:12.429 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:12.429 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:12.429 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:17:12.429 00:17:12.429 --- 10.0.0.2 ping statistics --- 00:17:12.429 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:12.429 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:17:12.429 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:12.429 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@461 -- # return 0 00:17:12.429 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:12.429 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:12.429 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:12.429 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:12.429 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:12.429 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:12.429 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:12.429 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:17:12.429 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:12.429 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:12.429 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:17:12.429 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
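Before the target starts, nvmf_veth_init has built the topology that the four pings above verify end to end: two initiator-side veth pairs on the host (10.0.0.1 and 10.0.0.2), two target-side veth pairs whose endpoints live in the nvmf_tgt_ns_spdk namespace (10.0.0.3 and 10.0.0.4), all peer interfaces enslaved to the nvmf_br bridge, and iptables rules admitting TCP port 4420. A condensed sketch of the same commands shown in the trace:

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if  type veth peer name nvmf_init_br      # host initiator,  10.0.0.1/24
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2     # host initiator,  10.0.0.2/24
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br       # target (netns),  10.0.0.3/24
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2      # target (netns),  10.0.0.4/24
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
ip link add nvmf_br type bridge
for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2 nvmf_br; do
    ip link set "$dev" up
done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" master nvmf_br                           # bridge ties host and netns sides together
done
iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.3                                              # host to target path, as verified above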
00:17:12.429 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=75352 00:17:12.429 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:17:12.429 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 75352 00:17:12.429 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 75352 ']' 00:17:12.429 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:12.429 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:12.429 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:12.429 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:12.429 09:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:17:12.429 [2024-11-19 09:44:59.949568] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:17:12.429 [2024-11-19 09:44:59.949923] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:12.686 [2024-11-19 09:45:00.103010] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:12.686 [2024-11-19 09:45:00.171407] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:12.686 [2024-11-19 09:45:00.171491] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:12.686 [2024-11-19 09:45:00.171517] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:12.686 [2024-11-19 09:45:00.171529] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:12.686 [2024-11-19 09:45:00.171538] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
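nvmfappstart above launches the target inside the namespace with shared-memory id 0, the full 0xFFFF tracepoint mask, and core mask 0xE, which is why DPDK reports three available cores and three reactors come up on cores 1 through 3 just below. A sketch of that launch plus a minimal stand-in for waitforlisten; the real helper in autotest_common.sh does more bookkeeping than this loop:

ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
nvmfpid=$!

# poll the RPC socket until the app answers; rpc_get_methods is a cheap query
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
until "$rpc" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
    sleep 0.2
done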
00:17:12.686 [2024-11-19 09:45:00.172808] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:12.686 [2024-11-19 09:45:00.176299] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:12.686 [2024-11-19 09:45:00.176326] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:12.686 [2024-11-19 09:45:00.234552] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:12.686 09:45:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:12.686 09:45:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:17:12.686 09:45:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:12.686 09:45:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:12.687 09:45:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:17:12.944 09:45:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:12.944 09:45:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:13.202 [2024-11-19 09:45:00.609779] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:13.202 09:45:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:17:13.460 Malloc0 00:17:13.460 09:45:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:13.719 09:45:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:13.976 09:45:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:17:14.235 [2024-11-19 09:45:01.832964] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:14.235 09:45:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:17:14.492 [2024-11-19 09:45:02.085151] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:17:14.493 09:45:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:17:14.751 [2024-11-19 09:45:02.337387] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 *** 00:17:14.751 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
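The RPCs issued above give the target its whole configuration for this test: a TCP transport with an 8192-byte I/O unit size, a 64 MiB Malloc bdev with 512-byte blocks, one subsystem, and listeners on three ports of the same 10.0.0.3 address so the host can be bounced between them later. Collected in one place, with the same arguments as in the trace:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
"$rpc" nvmf_create_transport -t tcp -o -u 8192
"$rpc" bdev_malloc_create 64 512 -b Malloc0
"$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
"$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
for port in 4420 4421 4422; do
    "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s "$port"
done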
00:17:14.751 09:45:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=75406 00:17:14.751 09:45:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:17:14.751 09:45:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:14.751 09:45:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 75406 /var/tmp/bdevperf.sock 00:17:14.751 09:45:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 75406 ']' 00:17:14.751 09:45:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:14.751 09:45:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:14.751 09:45:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:14.751 09:45:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:14.751 09:45:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:17:15.317 09:45:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:15.317 09:45:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:17:15.317 09:45:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:17:15.575 NVMe0n1 00:17:15.575 09:45:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:17:15.833 00:17:15.833 09:45:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=75418 00:17:15.833 09:45:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:15.833 09:45:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:17:17.209 09:45:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:17:17.209 [2024-11-19 09:45:04.724185] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11aecf0 is same with the state(6) to be set 00:17:17.209 [2024-11-19 09:45:04.725296] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11aecf0 is same with the state(6) to be set 00:17:17.209 [2024-11-19 09:45:04.725314] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11aecf0 is same with the state(6) to be set 00:17:17.209 [2024-11-19 09:45:04.725323] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11aecf0 is same with the state(6) to be set 00:17:17.209 [2024-11-19 09:45:04.725332] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x11aecf0 is same with the state(6) to be set 00:17:17.209
[... the tcp.c:1773:nvmf_tcp_qpair_set_recv_state *ERROR* line "The recv state of tqpair=0x11aecf0 is same with the state(6) to be set" repeats many more times between 09:45:04.725 and 09:45:04.726 while the 4420 listener is torn down; the duplicate lines are omitted here ...]
[2024-11-19
09:45:04.726144] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11aecf0 is same with the state(6) to be set 00:17:17.210 [2024-11-19 09:45:04.726152] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11aecf0 is same with the state(6) to be set 00:17:17.210 [2024-11-19 09:45:04.726160] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11aecf0 is same with the state(6) to be set 00:17:17.210 [2024-11-19 09:45:04.726173] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11aecf0 is same with the state(6) to be set 00:17:17.210 [2024-11-19 09:45:04.726181] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11aecf0 is same with the state(6) to be set 00:17:17.210 [2024-11-19 09:45:04.726190] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11aecf0 is same with the state(6) to be set 00:17:17.210 [2024-11-19 09:45:04.726198] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11aecf0 is same with the state(6) to be set 00:17:17.210 [2024-11-19 09:45:04.726219] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11aecf0 is same with the state(6) to be set 00:17:17.210 [2024-11-19 09:45:04.726232] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11aecf0 is same with the state(6) to be set 00:17:17.210 [2024-11-19 09:45:04.726241] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11aecf0 is same with the state(6) to be set 00:17:17.210 [2024-11-19 09:45:04.726250] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11aecf0 is same with the state(6) to be set 00:17:17.210 [2024-11-19 09:45:04.726259] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11aecf0 is same with the state(6) to be set 00:17:17.210 [2024-11-19 09:45:04.726267] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11aecf0 is same with the state(6) to be set 00:17:17.210 [2024-11-19 09:45:04.726276] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11aecf0 is same with the state(6) to be set 00:17:17.210 [2024-11-19 09:45:04.726285] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11aecf0 is same with the state(6) to be set 00:17:17.210 [2024-11-19 09:45:04.726293] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11aecf0 is same with the state(6) to be set 00:17:17.210 [2024-11-19 09:45:04.726302] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11aecf0 is same with the state(6) to be set 00:17:17.210 [2024-11-19 09:45:04.726310] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11aecf0 is same with the state(6) to be set 00:17:17.210 09:45:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:17:20.490 09:45:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:17:20.490 00:17:20.748 09:45:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:17:21.006 09:45:08 
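What the trace is exercising here is the bdev_nvme failover path: NVMe0 was attached through the bdevperf RPC socket with -x failover on both 4420 and 4421, so when the script removed the 4420 listener the target tore down that qpair (the repeated recv-state messages above) and the bdev switched to 4421; a third path on 4422 is then attached and 4421 is removed in turn, and the trace below restores 4420 and drops 4422 before the run ends. The sequence up to this point, condensed; the brpc helper is only shorthand for the "-s /var/tmp/bdevperf.sock" form used in the trace:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
brpc() { "$rpc" -s /var/tmp/bdevperf.sock "$@"; }    # RPCs aimed at the bdevperf app

brpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
     -n nqn.2016-06.io.spdk:cnode1 -x failover       # primary path
brpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 \
     -n nqn.2016-06.io.spdk:cnode1 -x failover       # first alternate path
# (bdevperf.py perform_tests starts the I/O loop between the attaches and the first removal)
"$rpc" nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
sleep 3                                              # I/O fails over to 4421
brpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 \
     -n nqn.2016-06.io.spdk:cnode1 -x failover       # second alternate path
"$rpc" nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421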
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:17:24.289 09:45:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:17:24.289 [2024-11-19 09:45:11.715163] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:24.289 09:45:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:17:25.227 09:45:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:17:25.485 09:45:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 75418 00:17:32.058 { 00:17:32.058 "results": [ 00:17:32.058 { 00:17:32.058 "job": "NVMe0n1", 00:17:32.058 "core_mask": "0x1", 00:17:32.058 "workload": "verify", 00:17:32.058 "status": "finished", 00:17:32.058 "verify_range": { 00:17:32.058 "start": 0, 00:17:32.058 "length": 16384 00:17:32.058 }, 00:17:32.058 "queue_depth": 128, 00:17:32.058 "io_size": 4096, 00:17:32.058 "runtime": 15.010035, 00:17:32.058 "iops": 8884.722787122082, 00:17:32.058 "mibps": 34.70594838719563, 00:17:32.058 "io_failed": 3541, 00:17:32.058 "io_timeout": 0, 00:17:32.058 "avg_latency_us": 14001.559555006903, 00:17:32.058 "min_latency_us": 662.8072727272727, 00:17:32.058 "max_latency_us": 19065.01818181818 00:17:32.058 } 00:17:32.058 ], 00:17:32.058 "core_count": 1 00:17:32.058 } 00:17:32.058 09:45:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 75406 00:17:32.058 09:45:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 75406 ']' 00:17:32.058 09:45:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 75406 00:17:32.058 09:45:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:17:32.058 09:45:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:32.058 09:45:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75406 00:17:32.058 killing process with pid 75406 00:17:32.058 09:45:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:32.058 09:45:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:32.058 09:45:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75406' 00:17:32.059 09:45:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 75406 00:17:32.059 09:45:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 75406 00:17:32.059 09:45:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:17:32.059 [2024-11-19 09:45:02.401967] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 
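The JSON block above is the perform_tests result for the 15-second verify run: about 8884.7 IOPS of 4 KiB reads (34.7 MiB/s), with 3541 I/Os reported failed, consistent with the requests that were aborted while listeners were being removed. A small sketch for pulling the headline numbers out, assuming the JSON has been saved to a file (results.json here is illustrative; the script itself keeps the bdevperf output in try.txt) and that jq is available:

# headline numbers from the perform_tests output
jq -r '.results[0] | "\(.iops) IOPS  \(.mibps) MiB/s  \(.io_failed) failed I/Os over \(.runtime)s"' results.json

# cross-check the throughput: IOPS x 4096-byte I/O size, converted to MiB/s
awk 'BEGIN { printf "%.2f MiB/s\n", 8884.722787122082 * 4096 / (1024 * 1024) }'   # ~34.71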
00:17:32.059 [2024-11-19 09:45:02.402077] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75406 ] 00:17:32.059 [2024-11-19 09:45:02.548466] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:32.059 [2024-11-19 09:45:02.614975] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:32.059 [2024-11-19 09:45:02.671741] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:32.059 Running I/O for 15 seconds... 00:17:32.059 6932.00 IOPS, 27.08 MiB/s [2024-11-19T09:45:19.682Z] [2024-11-19 09:45:04.726384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:62752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.059 [2024-11-19 09:45:04.726453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.059 [2024-11-19 09:45:04.726483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:62760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.059 [2024-11-19 09:45:04.726499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.059 [2024-11-19 09:45:04.726516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:62768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.059 [2024-11-19 09:45:04.726530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.059 [2024-11-19 09:45:04.726546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:62776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.059 [2024-11-19 09:45:04.726560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.059 [2024-11-19 09:45:04.726576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:62784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.059 [2024-11-19 09:45:04.726590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.059 [2024-11-19 09:45:04.726606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:62792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.059 [2024-11-19 09:45:04.726620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.059 [2024-11-19 09:45:04.726636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:62800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.059 [2024-11-19 09:45:04.726651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.059 [2024-11-19 09:45:04.726667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:62808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.059 [2024-11-19 09:45:04.726680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:17:32.059
[... the same command/completion pattern repeats for the remaining outstanding reads (qid:1, lba 62816 through 63208): each READ is printed by nvme_qpair.c:243 and then completed by nvme_qpair.c:474 with ABORTED - SQ DELETION (00/08) as the 4420 path is torn down; the duplicate pairs are omitted here ...]
[2024-11-19 09:45:04.728382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:63216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:32.060 [2024-11-19 09:45:04.728396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.060 [2024-11-19 09:45:04.728412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:63224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.060 [2024-11-19 09:45:04.728425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.060 [2024-11-19 09:45:04.728441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:63232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.060 [2024-11-19 09:45:04.728454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.060 [2024-11-19 09:45:04.728479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:63240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.060 [2024-11-19 09:45:04.728493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.060 [2024-11-19 09:45:04.728509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:63248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.060 [2024-11-19 09:45:04.728528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.060 [2024-11-19 09:45:04.728543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:63256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.060 [2024-11-19 09:45:04.728557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.060 [2024-11-19 09:45:04.728573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:63264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.060 [2024-11-19 09:45:04.728591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.060 [2024-11-19 09:45:04.728607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:63272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.060 [2024-11-19 09:45:04.728621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.060 [2024-11-19 09:45:04.728637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:63280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.060 [2024-11-19 09:45:04.728650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.060 [2024-11-19 09:45:04.728666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:63288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.060 [2024-11-19 09:45:04.728679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.060 [2024-11-19 09:45:04.728696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:63296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.060 [2024-11-19 09:45:04.728709] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.060 [2024-11-19 09:45:04.728725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:63304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.060 [2024-11-19 09:45:04.728739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.060 [2024-11-19 09:45:04.728754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:63312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.060 [2024-11-19 09:45:04.728767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.060 [2024-11-19 09:45:04.728783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:63320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.060 [2024-11-19 09:45:04.728797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.060 [2024-11-19 09:45:04.728812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:63328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.060 [2024-11-19 09:45:04.728826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.060 [2024-11-19 09:45:04.728841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:63336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.060 [2024-11-19 09:45:04.728862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.060 [2024-11-19 09:45:04.728879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:63344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.060 [2024-11-19 09:45:04.728892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.061 [2024-11-19 09:45:04.728908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:63352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.061 [2024-11-19 09:45:04.728922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.061 [2024-11-19 09:45:04.728938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:63360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.061 [2024-11-19 09:45:04.728951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.061 [2024-11-19 09:45:04.728966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:63368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.061 [2024-11-19 09:45:04.728980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.061 [2024-11-19 09:45:04.728996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:63376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.061 [2024-11-19 09:45:04.729017] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.061 [2024-11-19 09:45:04.729033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:63384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.061 [2024-11-19 09:45:04.729047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.061 [2024-11-19 09:45:04.729062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:63392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.061 [2024-11-19 09:45:04.729077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.061 [2024-11-19 09:45:04.729093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:63400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.061 [2024-11-19 09:45:04.729106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.061 [2024-11-19 09:45:04.729122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:63408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.061 [2024-11-19 09:45:04.729136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.061 [2024-11-19 09:45:04.729151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:63416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.061 [2024-11-19 09:45:04.729165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.061 [2024-11-19 09:45:04.729180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:63424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.061 [2024-11-19 09:45:04.729194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.061 [2024-11-19 09:45:04.729219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:63432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.061 [2024-11-19 09:45:04.729236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.061 [2024-11-19 09:45:04.729259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:63440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.061 [2024-11-19 09:45:04.729274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.061 [2024-11-19 09:45:04.729290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:63448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.061 [2024-11-19 09:45:04.729304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.061 [2024-11-19 09:45:04.729320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:63456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.061 [2024-11-19 09:45:04.729333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.061 [2024-11-19 09:45:04.729349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:63464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.061 [2024-11-19 09:45:04.729362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.061 [2024-11-19 09:45:04.729378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:63472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.061 [2024-11-19 09:45:04.729392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.061 [2024-11-19 09:45:04.729408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:63480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.061 [2024-11-19 09:45:04.729421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.061 [2024-11-19 09:45:04.729437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:63488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.061 [2024-11-19 09:45:04.729451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.061 [2024-11-19 09:45:04.729466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:63496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.061 [2024-11-19 09:45:04.729480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.061 [2024-11-19 09:45:04.729495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:63504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.061 [2024-11-19 09:45:04.729509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.061 [2024-11-19 09:45:04.729525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:63512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.061 [2024-11-19 09:45:04.729538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.061 [2024-11-19 09:45:04.729554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:63520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.061 [2024-11-19 09:45:04.729568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.061 [2024-11-19 09:45:04.729584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:63528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.061 [2024-11-19 09:45:04.729597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.061 [2024-11-19 09:45:04.729613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:63536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.061 [2024-11-19 09:45:04.729632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.061 [2024-11-19 09:45:04.729649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:63544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.061 [2024-11-19 09:45:04.729663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.061 [2024-11-19 09:45:04.729679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:63552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.061 [2024-11-19 09:45:04.729692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.061 [2024-11-19 09:45:04.729707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:63560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.061 [2024-11-19 09:45:04.729721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.061 [2024-11-19 09:45:04.729736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:63568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.061 [2024-11-19 09:45:04.729750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.061 [2024-11-19 09:45:04.729765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:63576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.061 [2024-11-19 09:45:04.729779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.061 [2024-11-19 09:45:04.729794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:63584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.061 [2024-11-19 09:45:04.729808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.061 [2024-11-19 09:45:04.729823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:63592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.061 [2024-11-19 09:45:04.729838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.061 [2024-11-19 09:45:04.729853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:63600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.061 [2024-11-19 09:45:04.729866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.061 [2024-11-19 09:45:04.729882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:63608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.061 [2024-11-19 09:45:04.729896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.061 [2024-11-19 09:45:04.729912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:63616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.061 [2024-11-19 09:45:04.729925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:17:32.061 [2024-11-19 09:45:04.729940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:63624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.061 [2024-11-19 09:45:04.729954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.061 [2024-11-19 09:45:04.729969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:63632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.061 [2024-11-19 09:45:04.729983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.061 [2024-11-19 09:45:04.729999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:63656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.061 [2024-11-19 09:45:04.730024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.061 [2024-11-19 09:45:04.730041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:63664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.061 [2024-11-19 09:45:04.730055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.061 [2024-11-19 09:45:04.730071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:63672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.062 [2024-11-19 09:45:04.730085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.062 [2024-11-19 09:45:04.730100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:63680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.062 [2024-11-19 09:45:04.730114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.062 [2024-11-19 09:45:04.730129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:63688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.062 [2024-11-19 09:45:04.730142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.062 [2024-11-19 09:45:04.730158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:63696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.062 [2024-11-19 09:45:04.730171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.062 [2024-11-19 09:45:04.730186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:63704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.062 [2024-11-19 09:45:04.730200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.062 [2024-11-19 09:45:04.730240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:63712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.062 [2024-11-19 09:45:04.730255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.062 [2024-11-19 09:45:04.730270] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:63720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:17:32.062 [2024-11-19 09:45:04.730283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:32.062 [2024-11-19 09:45:04.730297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:63728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:17:32.062 [2024-11-19 09:45:04.730311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:32.062 [2024-11-19 09:45:04.730326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:63736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:17:32.062 [2024-11-19 09:45:04.730339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:32.062 [2024-11-19 09:45:04.730354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:63744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:17:32.062 [2024-11-19 09:45:04.730367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:32.062 [2024-11-19 09:45:04.730382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:63752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:17:32.062 [2024-11-19 09:45:04.730396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:32.062 [2024-11-19 09:45:04.730418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:63760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:17:32.062 [2024-11-19 09:45:04.730432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:32.062 [2024-11-19 09:45:04.730447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:63768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:17:32.062 [2024-11-19 09:45:04.730461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:32.062 [2024-11-19 09:45:04.730476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:63640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:32.062 [2024-11-19 09:45:04.730489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:32.062 [2024-11-19 09:45:04.730504] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x164d070 is same with the state(6) to be set
00:17:32.062 [2024-11-19 09:45:04.730520] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:17:32.062 [2024-11-19 09:45:04.730530] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:17:32.062 [2024-11-19 09:45:04.730545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:63648 len:8 PRP1 0x0 PRP2 0x0
00:17:32.062 [2024-11-19 09:45:04.730559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:32.062 [2024-11-19 09:45:04.730630] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.3:4420 to 10.0.0.3:4421
00:17:32.062 [2024-11-19 09:45:04.730685] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:17:32.062 [2024-11-19 09:45:04.730706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:32.062 [2024-11-19 09:45:04.730722] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:17:32.062 [2024-11-19 09:45:04.730735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:32.062 [2024-11-19 09:45:04.730749] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:17:32.062 [2024-11-19 09:45:04.730762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:32.062 [2024-11-19 09:45:04.730776] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:17:32.062 [2024-11-19 09:45:04.730788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:32.062 [2024-11-19 09:45:04.730802] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:17:32.062 [2024-11-19 09:45:04.734779] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:17:32.062 [2024-11-19 09:45:04.734817] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b2710 (9): Bad file descriptor
00:17:32.062 [2024-11-19 09:45:04.766331] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful.
00:17:32.062 7621.50 IOPS, 29.77 MiB/s [2024-11-19T09:45:19.685Z] 8150.33 IOPS, 31.84 MiB/s [2024-11-19T09:45:19.685Z] 8418.75 IOPS, 32.89 MiB/s [2024-11-19T09:45:19.685Z] [2024-11-19 09:45:08.409474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:78600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.062 [2024-11-19 09:45:08.409553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.062 [2024-11-19 09:45:08.409622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:78608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.062 [2024-11-19 09:45:08.409641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.062 [2024-11-19 09:45:08.409657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:78616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.062 [2024-11-19 09:45:08.409672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.062 [2024-11-19 09:45:08.409688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:78624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.062 [2024-11-19 09:45:08.409702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.062 [2024-11-19 09:45:08.409717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:78632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.062 [2024-11-19 09:45:08.409731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.062 [2024-11-19 09:45:08.409747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:78640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.062 [2024-11-19 09:45:08.409761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.062 [2024-11-19 09:45:08.409777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:78648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.062 [2024-11-19 09:45:08.409790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.062 [2024-11-19 09:45:08.409806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:78656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.062 [2024-11-19 09:45:08.409820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.062 [2024-11-19 09:45:08.409836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:78088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.062 [2024-11-19 09:45:08.409850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.062 [2024-11-19 09:45:08.409866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:78096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.062 [2024-11-19 09:45:08.409879] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.062 [2024-11-19 09:45:08.409895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:78104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.062 [2024-11-19 09:45:08.409909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.062 [2024-11-19 09:45:08.409924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:78112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.062 [2024-11-19 09:45:08.409946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.062 [2024-11-19 09:45:08.409961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:78120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.062 [2024-11-19 09:45:08.409975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.062 [2024-11-19 09:45:08.409991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:78128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.062 [2024-11-19 09:45:08.410023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.062 [2024-11-19 09:45:08.410040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:78136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.062 [2024-11-19 09:45:08.410054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.062 [2024-11-19 09:45:08.410070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:78144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.062 [2024-11-19 09:45:08.410084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.062 [2024-11-19 09:45:08.410099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:78152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.063 [2024-11-19 09:45:08.410113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.063 [2024-11-19 09:45:08.410132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:78160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.063 [2024-11-19 09:45:08.410146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.063 [2024-11-19 09:45:08.410162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:78168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.063 [2024-11-19 09:45:08.410176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.063 [2024-11-19 09:45:08.410192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:78176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.063 [2024-11-19 09:45:08.410218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.063 [2024-11-19 09:45:08.410238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:78184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.063 [2024-11-19 09:45:08.410252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.063 [2024-11-19 09:45:08.410268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:78192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.063 [2024-11-19 09:45:08.410281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.063 [2024-11-19 09:45:08.410297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:78200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.063 [2024-11-19 09:45:08.410311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.063 [2024-11-19 09:45:08.410326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:78208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.063 [2024-11-19 09:45:08.410341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.063 [2024-11-19 09:45:08.410356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:78216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.063 [2024-11-19 09:45:08.410370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.063 [2024-11-19 09:45:08.410385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:78224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.063 [2024-11-19 09:45:08.410399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.063 [2024-11-19 09:45:08.410431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:78232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.063 [2024-11-19 09:45:08.410446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.063 [2024-11-19 09:45:08.410462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.063 [2024-11-19 09:45:08.410477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.063 [2024-11-19 09:45:08.410493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:78248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.063 [2024-11-19 09:45:08.410506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.063 [2024-11-19 09:45:08.410523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:78256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.063 [2024-11-19 09:45:08.410536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.063 [2024-11-19 09:45:08.410552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:78264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.063 [2024-11-19 09:45:08.410566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.063 [2024-11-19 09:45:08.410581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:78272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.063 [2024-11-19 09:45:08.410595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.063 [2024-11-19 09:45:08.410610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:78664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.063 [2024-11-19 09:45:08.410624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.063 [2024-11-19 09:45:08.410641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:78672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.063 [2024-11-19 09:45:08.410655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.063 [2024-11-19 09:45:08.410671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:78680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.063 [2024-11-19 09:45:08.410685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.063 [2024-11-19 09:45:08.410701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:78688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.063 [2024-11-19 09:45:08.410715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.063 [2024-11-19 09:45:08.410736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:78696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.063 [2024-11-19 09:45:08.410750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.063 [2024-11-19 09:45:08.410766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:78704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.063 [2024-11-19 09:45:08.410779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.063 [2024-11-19 09:45:08.410795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:78712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.063 [2024-11-19 09:45:08.410815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.063 [2024-11-19 09:45:08.410832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:78720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.063 [2024-11-19 09:45:08.410846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.063 
[2024-11-19 09:45:08.410861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:78728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.063 [2024-11-19 09:45:08.410875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.063 [2024-11-19 09:45:08.410890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:78736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.063 [2024-11-19 09:45:08.410904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.063 [2024-11-19 09:45:08.410919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:78744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.063 [2024-11-19 09:45:08.410933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.063 [2024-11-19 09:45:08.410949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:78752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.063 [2024-11-19 09:45:08.410971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.063 [2024-11-19 09:45:08.410986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:78760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.063 [2024-11-19 09:45:08.411000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.063 [2024-11-19 09:45:08.411015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:78768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.063 [2024-11-19 09:45:08.411029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.063 [2024-11-19 09:45:08.411045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:78776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.063 [2024-11-19 09:45:08.411058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.063 [2024-11-19 09:45:08.411075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:78784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.063 [2024-11-19 09:45:08.411089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.063 [2024-11-19 09:45:08.411115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:78792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.063 [2024-11-19 09:45:08.411141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.063 [2024-11-19 09:45:08.411157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:78800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.063 [2024-11-19 09:45:08.411172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.063 [2024-11-19 09:45:08.411187] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:78808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.063 [2024-11-19 09:45:08.411201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.064 [2024-11-19 09:45:08.411230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:78816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.064 [2024-11-19 09:45:08.411259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.064 [2024-11-19 09:45:08.411276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:78824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.064 [2024-11-19 09:45:08.411290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.064 [2024-11-19 09:45:08.411306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:78832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.064 [2024-11-19 09:45:08.411320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.064 [2024-11-19 09:45:08.411335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:78840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.064 [2024-11-19 09:45:08.411349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.064 [2024-11-19 09:45:08.411364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:78848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.064 [2024-11-19 09:45:08.411378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.064 [2024-11-19 09:45:08.411394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:78280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.064 [2024-11-19 09:45:08.411408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.064 [2024-11-19 09:45:08.411423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:78288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.064 [2024-11-19 09:45:08.411437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.064 [2024-11-19 09:45:08.411453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:78296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.064 [2024-11-19 09:45:08.411467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.064 [2024-11-19 09:45:08.411482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:78304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.064 [2024-11-19 09:45:08.411496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.064 [2024-11-19 09:45:08.411511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:2 nsid:1 lba:78312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.064 [2024-11-19 09:45:08.411525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.064 [2024-11-19 09:45:08.411541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:78320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.064 [2024-11-19 09:45:08.411555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.064 [2024-11-19 09:45:08.411571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:78328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.064 [2024-11-19 09:45:08.411585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.064 [2024-11-19 09:45:08.411601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:78336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.064 [2024-11-19 09:45:08.411615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.064 [2024-11-19 09:45:08.411637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:78856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.064 [2024-11-19 09:45:08.411651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.064 [2024-11-19 09:45:08.411668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:78864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.064 [2024-11-19 09:45:08.411682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.064 [2024-11-19 09:45:08.411698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:78872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.064 [2024-11-19 09:45:08.411712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.064 [2024-11-19 09:45:08.411728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:78880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.064 [2024-11-19 09:45:08.411741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.064 [2024-11-19 09:45:08.411757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:78888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.064 [2024-11-19 09:45:08.411771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.064 [2024-11-19 09:45:08.411786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:78896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.064 [2024-11-19 09:45:08.411800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.064 [2024-11-19 09:45:08.411816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:78904 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:17:32.064 [2024-11-19 09:45:08.411829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.064 [2024-11-19 09:45:08.411845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:78912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.064 [2024-11-19 09:45:08.411859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.064 [2024-11-19 09:45:08.411874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:78920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.064 [2024-11-19 09:45:08.411888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.064 [2024-11-19 09:45:08.411903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:78928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.064 [2024-11-19 09:45:08.411917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.064 [2024-11-19 09:45:08.411933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:78936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.064 [2024-11-19 09:45:08.411946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.064 [2024-11-19 09:45:08.411962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:78944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.064 [2024-11-19 09:45:08.411976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.064 [2024-11-19 09:45:08.411991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:78952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.064 [2024-11-19 09:45:08.412013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.064 [2024-11-19 09:45:08.412029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:78960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.064 [2024-11-19 09:45:08.412043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.064 [2024-11-19 09:45:08.412059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:78968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.064 [2024-11-19 09:45:08.412073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.064 [2024-11-19 09:45:08.412089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:78976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.064 [2024-11-19 09:45:08.412103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.064 [2024-11-19 09:45:08.412119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:78344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.064 [2024-11-19 
09:45:08.412133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.064 [2024-11-19 09:45:08.412149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:78352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.064 [2024-11-19 09:45:08.412164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.064 [2024-11-19 09:45:08.412180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:78360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.064 [2024-11-19 09:45:08.412193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.064 [2024-11-19 09:45:08.412219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:78368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.064 [2024-11-19 09:45:08.412235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.064 [2024-11-19 09:45:08.412251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:78376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.064 [2024-11-19 09:45:08.412265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.064 [2024-11-19 09:45:08.412281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:78384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.064 [2024-11-19 09:45:08.412294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.064 [2024-11-19 09:45:08.412310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:78392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.064 [2024-11-19 09:45:08.412324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.064 [2024-11-19 09:45:08.412340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:78400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.064 [2024-11-19 09:45:08.412353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.064 [2024-11-19 09:45:08.412369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:78408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.064 [2024-11-19 09:45:08.412383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.064 [2024-11-19 09:45:08.412398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:78416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.064 [2024-11-19 09:45:08.412422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.064 [2024-11-19 09:45:08.412440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:78424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.064 [2024-11-19 09:45:08.412455] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.065 [2024-11-19 09:45:08.412470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:78432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.065 [2024-11-19 09:45:08.412484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.065 [2024-11-19 09:45:08.412500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:78440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.065 [2024-11-19 09:45:08.412522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.065 [2024-11-19 09:45:08.412538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:78448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.065 [2024-11-19 09:45:08.412552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.065 [2024-11-19 09:45:08.412567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:78456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.065 [2024-11-19 09:45:08.412581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.065 [2024-11-19 09:45:08.412597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:78464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.065 [2024-11-19 09:45:08.412610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.065 [2024-11-19 09:45:08.412626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:78984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.065 [2024-11-19 09:45:08.412640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.065 [2024-11-19 09:45:08.412662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:78992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.065 [2024-11-19 09:45:08.412677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.065 [2024-11-19 09:45:08.412693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:79000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.065 [2024-11-19 09:45:08.412707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.065 [2024-11-19 09:45:08.412723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:79008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.065 [2024-11-19 09:45:08.412737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.065 [2024-11-19 09:45:08.412752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:79016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.065 [2024-11-19 09:45:08.412766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.065 [2024-11-19 09:45:08.412781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:79024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.065 [2024-11-19 09:45:08.412795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.065 [2024-11-19 09:45:08.412818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:79032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.065 [2024-11-19 09:45:08.412833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.065 [2024-11-19 09:45:08.412849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:79040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.065 [2024-11-19 09:45:08.412863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.065 [2024-11-19 09:45:08.412887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:78472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.065 [2024-11-19 09:45:08.412900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.065 [2024-11-19 09:45:08.412916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:78480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.065 [2024-11-19 09:45:08.412930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.065 [2024-11-19 09:45:08.412945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:78488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.065 [2024-11-19 09:45:08.412959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.065 [2024-11-19 09:45:08.412974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:78496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.065 [2024-11-19 09:45:08.412988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.065 [2024-11-19 09:45:08.413004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:78504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.065 [2024-11-19 09:45:08.413023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.065 [2024-11-19 09:45:08.413039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:78512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.065 [2024-11-19 09:45:08.413052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.065 [2024-11-19 09:45:08.413068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:78520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.065 [2024-11-19 09:45:08.413082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.065 [2024-11-19 09:45:08.413097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:78528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.065 [2024-11-19 09:45:08.413111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.065 [2024-11-19 09:45:08.413126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:78536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.065 [2024-11-19 09:45:08.413140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.065 [2024-11-19 09:45:08.413160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:78544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.065 [2024-11-19 09:45:08.413176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.065 [2024-11-19 09:45:08.413191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:78552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.065 [2024-11-19 09:45:08.413221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.065 [2024-11-19 09:45:08.413239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:78560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.065 [2024-11-19 09:45:08.413253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.065 [2024-11-19 09:45:08.413269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:78568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.065 [2024-11-19 09:45:08.413283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.065 [2024-11-19 09:45:08.413298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:78576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.065 [2024-11-19 09:45:08.413313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.065 [2024-11-19 09:45:08.413328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:78584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.065 [2024-11-19 09:45:08.413342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.065 [2024-11-19 09:45:08.413357] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1651030 is same with the state(6) to be set 00:17:32.065 [2024-11-19 09:45:08.413374] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:32.065 [2024-11-19 09:45:08.413385] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:32.065 [2024-11-19 09:45:08.413395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78592 len:8 PRP1 0x0 PRP2 0x0 00:17:32.065 [2024-11-19 09:45:08.413409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.065 [2024-11-19 09:45:08.413423] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:32.065 [2024-11-19 09:45:08.413433] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:32.065 [2024-11-19 09:45:08.413443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79048 len:8 PRP1 0x0 PRP2 0x0 00:17:32.065 [2024-11-19 09:45:08.413457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.065 [2024-11-19 09:45:08.413470] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:32.065 [2024-11-19 09:45:08.413480] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:32.065 [2024-11-19 09:45:08.413495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79056 len:8 PRP1 0x0 PRP2 0x0 00:17:32.065 [2024-11-19 09:45:08.413509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.065 [2024-11-19 09:45:08.413523] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:32.065 [2024-11-19 09:45:08.413532] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:32.065 [2024-11-19 09:45:08.413543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79064 len:8 PRP1 0x0 PRP2 0x0 00:17:32.065 [2024-11-19 09:45:08.413556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.065 [2024-11-19 09:45:08.413569] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:32.065 [2024-11-19 09:45:08.413579] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:32.065 [2024-11-19 09:45:08.413589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79072 len:8 PRP1 0x0 PRP2 0x0 00:17:32.065 [2024-11-19 09:45:08.413615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.065 [2024-11-19 09:45:08.413630] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:32.065 [2024-11-19 09:45:08.413640] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:32.065 [2024-11-19 09:45:08.413650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79080 len:8 PRP1 0x0 PRP2 0x0 00:17:32.065 [2024-11-19 09:45:08.413664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.065 [2024-11-19 09:45:08.413677] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:32.065 [2024-11-19 09:45:08.413687] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:32.066 [2024-11-19 09:45:08.413697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79088 len:8 PRP1 0x0 PRP2 0x0 00:17:32.066 [2024-11-19 09:45:08.413710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.066 
[2024-11-19 09:45:08.413724] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:32.066 [2024-11-19 09:45:08.413733] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:32.066 [2024-11-19 09:45:08.413744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79096 len:8 PRP1 0x0 PRP2 0x0 00:17:32.066 [2024-11-19 09:45:08.413757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.066 [2024-11-19 09:45:08.413771] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:32.066 [2024-11-19 09:45:08.413780] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:32.066 [2024-11-19 09:45:08.413791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79104 len:8 PRP1 0x0 PRP2 0x0 00:17:32.066 [2024-11-19 09:45:08.413804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.066 [2024-11-19 09:45:08.413865] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.3:4421 to 10.0.0.3:4422 00:17:32.066 [2024-11-19 09:45:08.413922] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:32.066 [2024-11-19 09:45:08.413944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.066 [2024-11-19 09:45:08.413960] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:32.066 [2024-11-19 09:45:08.413974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.066 [2024-11-19 09:45:08.413988] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:32.066 [2024-11-19 09:45:08.414006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.066 [2024-11-19 09:45:08.414021] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:32.066 [2024-11-19 09:45:08.414034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.066 [2024-11-19 09:45:08.414048] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:17:32.066 [2024-11-19 09:45:08.417899] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:17:32.066 [2024-11-19 09:45:08.417951] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b2710 (9): Bad file descriptor 00:17:32.066 [2024-11-19 09:45:08.448749] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 
00:17:32.066 8481.80 IOPS, 33.13 MiB/s [2024-11-19T09:45:19.689Z] 8630.83 IOPS, 33.71 MiB/s [2024-11-19T09:45:19.689Z] 8626.43 IOPS, 33.70 MiB/s [2024-11-19T09:45:19.689Z] 8718.12 IOPS, 34.06 MiB/s [2024-11-19T09:45:19.689Z] 8790.33 IOPS, 34.34 MiB/s [2024-11-19T09:45:19.689Z] [2024-11-19 09:45:13.024412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:31096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.066 [2024-11-19 09:45:13.024486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.066 [2024-11-19 09:45:13.024516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:31104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.066 [2024-11-19 09:45:13.024532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.066 [2024-11-19 09:45:13.024549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:31112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.066 [2024-11-19 09:45:13.024564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.066 [2024-11-19 09:45:13.024584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:31120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.066 [2024-11-19 09:45:13.024599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.066 [2024-11-19 09:45:13.024614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:31128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.066 [2024-11-19 09:45:13.024629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.066 [2024-11-19 09:45:13.024645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:31136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.066 [2024-11-19 09:45:13.024659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.066 [2024-11-19 09:45:13.024674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:31144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.066 [2024-11-19 09:45:13.024689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.066 [2024-11-19 09:45:13.024704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:31152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.066 [2024-11-19 09:45:13.024719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.066 [2024-11-19 09:45:13.024735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:31160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.066 [2024-11-19 09:45:13.024749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.066 [2024-11-19 09:45:13.024765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:31168 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.066 [2024-11-19 09:45:13.024780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.066 [2024-11-19 09:45:13.024796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:31176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.066 [2024-11-19 09:45:13.024810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.066 [2024-11-19 09:45:13.024826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:31184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.066 [2024-11-19 09:45:13.024869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.066 [2024-11-19 09:45:13.024887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:31192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.066 [2024-11-19 09:45:13.024902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.066 [2024-11-19 09:45:13.024917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:31200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.066 [2024-11-19 09:45:13.024931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.066 [2024-11-19 09:45:13.024946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:31208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.066 [2024-11-19 09:45:13.024960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.066 [2024-11-19 09:45:13.024975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:31216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.066 [2024-11-19 09:45:13.024989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.066 [2024-11-19 09:45:13.025005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.066 [2024-11-19 09:45:13.025019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.066 [2024-11-19 09:45:13.025038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:30656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.066 [2024-11-19 09:45:13.025052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.066 [2024-11-19 09:45:13.025068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:30664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.066 [2024-11-19 09:45:13.025081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.066 [2024-11-19 09:45:13.025097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:30672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:32.066 [2024-11-19 09:45:13.025111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.066 [2024-11-19 09:45:13.025126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:30680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.066 [2024-11-19 09:45:13.025140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.066 [2024-11-19 09:45:13.025156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:30688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.066 [2024-11-19 09:45:13.025170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.066 [2024-11-19 09:45:13.025185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:30696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.066 [2024-11-19 09:45:13.025199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.066 [2024-11-19 09:45:13.025231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.066 [2024-11-19 09:45:13.025247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.066 [2024-11-19 09:45:13.025272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:30712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.066 [2024-11-19 09:45:13.025288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.066 [2024-11-19 09:45:13.025304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:30720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.066 [2024-11-19 09:45:13.025317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.066 [2024-11-19 09:45:13.025333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:30728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.066 [2024-11-19 09:45:13.025347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.066 [2024-11-19 09:45:13.025363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:30736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.066 [2024-11-19 09:45:13.025385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.067 [2024-11-19 09:45:13.025401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:30744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.067 [2024-11-19 09:45:13.025414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.067 [2024-11-19 09:45:13.025430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:30752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.067 [2024-11-19 09:45:13.025444] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.067 [2024-11-19 09:45:13.025459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:30760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.067 [2024-11-19 09:45:13.025473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.067 [2024-11-19 09:45:13.025489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:30768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.067 [2024-11-19 09:45:13.025508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.067 [2024-11-19 09:45:13.025524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:31224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.067 [2024-11-19 09:45:13.025539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.067 [2024-11-19 09:45:13.025555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.067 [2024-11-19 09:45:13.025569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.067 [2024-11-19 09:45:13.025584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.067 [2024-11-19 09:45:13.025598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.067 [2024-11-19 09:45:13.025614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.067 [2024-11-19 09:45:13.025627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.067 [2024-11-19 09:45:13.025643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:31256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.067 [2024-11-19 09:45:13.025664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.067 [2024-11-19 09:45:13.025680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:31264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.067 [2024-11-19 09:45:13.025694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.067 [2024-11-19 09:45:13.025710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:31272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.067 [2024-11-19 09:45:13.025724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.067 [2024-11-19 09:45:13.025739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:31280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.067 [2024-11-19 09:45:13.025753] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.067 [2024-11-19 09:45:13.025768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:31288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.067 [2024-11-19 09:45:13.025782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.067 [2024-11-19 09:45:13.025797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:31296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.067 [2024-11-19 09:45:13.025811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.067 [2024-11-19 09:45:13.025828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:31304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.067 [2024-11-19 09:45:13.025842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.067 [2024-11-19 09:45:13.025857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:31312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.067 [2024-11-19 09:45:13.025875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.067 [2024-11-19 09:45:13.025891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:31320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.067 [2024-11-19 09:45:13.025904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.067 [2024-11-19 09:45:13.025920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:31328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.067 [2024-11-19 09:45:13.025934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.067 [2024-11-19 09:45:13.025949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:31336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.067 [2024-11-19 09:45:13.025963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.067 [2024-11-19 09:45:13.025978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:31344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.067 [2024-11-19 09:45:13.025993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.067 [2024-11-19 09:45:13.026008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.067 [2024-11-19 09:45:13.026022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.067 [2024-11-19 09:45:13.026038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:30784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.067 [2024-11-19 09:45:13.026074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.067 [2024-11-19 09:45:13.026092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:30792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.067 [2024-11-19 09:45:13.026106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.067 [2024-11-19 09:45:13.026122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:30800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.067 [2024-11-19 09:45:13.026135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.067 [2024-11-19 09:45:13.026151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:30808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.067 [2024-11-19 09:45:13.026165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.067 [2024-11-19 09:45:13.026180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:30816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.067 [2024-11-19 09:45:13.026194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.067 [2024-11-19 09:45:13.026222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:30824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.067 [2024-11-19 09:45:13.026238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.067 [2024-11-19 09:45:13.026254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:30832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.067 [2024-11-19 09:45:13.026268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.067 [2024-11-19 09:45:13.026284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:30840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.067 [2024-11-19 09:45:13.026298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.067 [2024-11-19 09:45:13.026313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:30848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.067 [2024-11-19 09:45:13.026327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.067 [2024-11-19 09:45:13.026343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:30856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.067 [2024-11-19 09:45:13.026356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.067 [2024-11-19 09:45:13.026372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:30864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.067 [2024-11-19 09:45:13.026385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:17:32.067 [2024-11-19 09:45:13.026401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:30872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.067 [2024-11-19 09:45:13.026415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.067 [2024-11-19 09:45:13.026430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:30880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.067 [2024-11-19 09:45:13.026444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.068 [2024-11-19 09:45:13.026468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:30888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.068 [2024-11-19 09:45:13.026483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.068 [2024-11-19 09:45:13.026499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:30896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.068 [2024-11-19 09:45:13.026514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.068 [2024-11-19 09:45:13.026530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:31352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.068 [2024-11-19 09:45:13.026544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.068 [2024-11-19 09:45:13.026560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:31360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.068 [2024-11-19 09:45:13.026574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.068 [2024-11-19 09:45:13.026589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:31368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.068 [2024-11-19 09:45:13.026603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.068 [2024-11-19 09:45:13.026618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:31376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.068 [2024-11-19 09:45:13.026632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.068 [2024-11-19 09:45:13.026648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:31384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.068 [2024-11-19 09:45:13.026662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.068 [2024-11-19 09:45:13.026677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:31392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.068 [2024-11-19 09:45:13.026691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.068 [2024-11-19 09:45:13.026706] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:31400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.068 [2024-11-19 09:45:13.026720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.068 [2024-11-19 09:45:13.026736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:31408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.068 [2024-11-19 09:45:13.026749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.068 [2024-11-19 09:45:13.026765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:31416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.068 [2024-11-19 09:45:13.026779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.068 [2024-11-19 09:45:13.026794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:31424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.068 [2024-11-19 09:45:13.026808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.068 [2024-11-19 09:45:13.026824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:31432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.068 [2024-11-19 09:45:13.026844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.068 [2024-11-19 09:45:13.026860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:31440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.068 [2024-11-19 09:45:13.026874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.068 [2024-11-19 09:45:13.026890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:31448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.068 [2024-11-19 09:45:13.026904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.068 [2024-11-19 09:45:13.026919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:31456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.068 [2024-11-19 09:45:13.026933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.068 [2024-11-19 09:45:13.026949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:31464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.068 [2024-11-19 09:45:13.026962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.068 [2024-11-19 09:45:13.026978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:31472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.068 [2024-11-19 09:45:13.026993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.068 [2024-11-19 09:45:13.027008] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:63 nsid:1 lba:30904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.068 [2024-11-19 09:45:13.027022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.068 [2024-11-19 09:45:13.027039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:30912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.068 [2024-11-19 09:45:13.027053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.068 [2024-11-19 09:45:13.027069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:30920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.068 [2024-11-19 09:45:13.027083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.068 [2024-11-19 09:45:13.027098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:30928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.068 [2024-11-19 09:45:13.027124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.068 [2024-11-19 09:45:13.027141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:30936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.068 [2024-11-19 09:45:13.027155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.068 [2024-11-19 09:45:13.027171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:30944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.068 [2024-11-19 09:45:13.027184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.068 [2024-11-19 09:45:13.027200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:30952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.068 [2024-11-19 09:45:13.027225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.068 [2024-11-19 09:45:13.027250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:30960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.068 [2024-11-19 09:45:13.027266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.068 [2024-11-19 09:45:13.027282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:30968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.068 [2024-11-19 09:45:13.027295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.068 [2024-11-19 09:45:13.027312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:30976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.068 [2024-11-19 09:45:13.027326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.068 [2024-11-19 09:45:13.027342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 
lba:30984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.068 [2024-11-19 09:45:13.027356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.068 [2024-11-19 09:45:13.027371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:30992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.068 [2024-11-19 09:45:13.027385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.068 [2024-11-19 09:45:13.027400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.068 [2024-11-19 09:45:13.027414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.068 [2024-11-19 09:45:13.027437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.068 [2024-11-19 09:45:13.027452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.068 [2024-11-19 09:45:13.027467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.068 [2024-11-19 09:45:13.027481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.068 [2024-11-19 09:45:13.027497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:31024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.068 [2024-11-19 09:45:13.027511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.068 [2024-11-19 09:45:13.027526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:31480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.068 [2024-11-19 09:45:13.027540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.068 [2024-11-19 09:45:13.027563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:31488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.068 [2024-11-19 09:45:13.027578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.068 [2024-11-19 09:45:13.027593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:31496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.068 [2024-11-19 09:45:13.027607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.068 [2024-11-19 09:45:13.027622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:31504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.068 [2024-11-19 09:45:13.027643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.068 [2024-11-19 09:45:13.027660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:31512 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:17:32.068 [2024-11-19 09:45:13.027675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.068 [2024-11-19 09:45:13.027690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:31520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.068 [2024-11-19 09:45:13.027704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.069 [2024-11-19 09:45:13.027719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:31528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.069 [2024-11-19 09:45:13.027733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.069 [2024-11-19 09:45:13.027749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:31536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.069 [2024-11-19 09:45:13.027763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.069 [2024-11-19 09:45:13.027778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:31032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.069 [2024-11-19 09:45:13.027792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.069 [2024-11-19 09:45:13.027808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:31040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.069 [2024-11-19 09:45:13.027822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.069 [2024-11-19 09:45:13.027837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:31048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.069 [2024-11-19 09:45:13.027851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.069 [2024-11-19 09:45:13.027866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:31056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.069 [2024-11-19 09:45:13.027880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.069 [2024-11-19 09:45:13.027895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:31064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.069 [2024-11-19 09:45:13.027910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.069 [2024-11-19 09:45:13.027930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:31072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.069 [2024-11-19 09:45:13.027944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.069 [2024-11-19 09:45:13.027960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:31080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.069 [2024-11-19 09:45:13.027974] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.069 [2024-11-19 09:45:13.027988] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1651210 is same with the state(6) to be set 00:17:32.069 [2024-11-19 09:45:13.028005] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:32.069 [2024-11-19 09:45:13.028015] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:32.069 [2024-11-19 09:45:13.028032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:31088 len:8 PRP1 0x0 PRP2 0x0 00:17:32.069 [2024-11-19 09:45:13.028053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.069 [2024-11-19 09:45:13.028069] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:32.069 [2024-11-19 09:45:13.028079] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:32.069 [2024-11-19 09:45:13.028090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31544 len:8 PRP1 0x0 PRP2 0x0 00:17:32.069 [2024-11-19 09:45:13.028103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.069 [2024-11-19 09:45:13.028117] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:32.069 [2024-11-19 09:45:13.028127] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:32.069 [2024-11-19 09:45:13.028138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31552 len:8 PRP1 0x0 PRP2 0x0 00:17:32.069 [2024-11-19 09:45:13.028151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.069 [2024-11-19 09:45:13.028165] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:32.069 [2024-11-19 09:45:13.028175] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:32.069 [2024-11-19 09:45:13.028186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31560 len:8 PRP1 0x0 PRP2 0x0 00:17:32.069 [2024-11-19 09:45:13.028199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.069 [2024-11-19 09:45:13.028224] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:32.069 [2024-11-19 09:45:13.028235] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:32.069 [2024-11-19 09:45:13.028246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31568 len:8 PRP1 0x0 PRP2 0x0 00:17:32.069 [2024-11-19 09:45:13.028259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.069 [2024-11-19 09:45:13.028273] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:32.069 [2024-11-19 09:45:13.028283] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:32.069 [2024-11-19 09:45:13.028294] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31576 len:8 PRP1 0x0 PRP2 0x0 00:17:32.069 [2024-11-19 09:45:13.028307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.069 [2024-11-19 09:45:13.028321] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:32.069 [2024-11-19 09:45:13.028331] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:32.069 [2024-11-19 09:45:13.028341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31584 len:8 PRP1 0x0 PRP2 0x0 00:17:32.069 [2024-11-19 09:45:13.028359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.069 [2024-11-19 09:45:13.028373] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:32.069 [2024-11-19 09:45:13.028384] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:32.069 [2024-11-19 09:45:13.028394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31592 len:8 PRP1 0x0 PRP2 0x0 00:17:32.069 [2024-11-19 09:45:13.028407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.069 [2024-11-19 09:45:13.028426] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:32.069 [2024-11-19 09:45:13.028443] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:32.069 [2024-11-19 09:45:13.028455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31600 len:8 PRP1 0x0 PRP2 0x0 00:17:32.069 [2024-11-19 09:45:13.028473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.069 [2024-11-19 09:45:13.028487] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:32.069 [2024-11-19 09:45:13.028497] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:32.069 [2024-11-19 09:45:13.028508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31608 len:8 PRP1 0x0 PRP2 0x0 00:17:32.069 [2024-11-19 09:45:13.028521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.069 [2024-11-19 09:45:13.028535] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:32.069 [2024-11-19 09:45:13.028545] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:32.069 [2024-11-19 09:45:13.028556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31616 len:8 PRP1 0x0 PRP2 0x0 00:17:32.069 [2024-11-19 09:45:13.028569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.069 [2024-11-19 09:45:13.028583] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:32.069 [2024-11-19 09:45:13.028593] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:32.069 [2024-11-19 09:45:13.028603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31624 len:8 PRP1 
0x0 PRP2 0x0 00:17:32.069 [2024-11-19 09:45:13.028617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.069 [2024-11-19 09:45:13.028631] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:32.069 [2024-11-19 09:45:13.028641] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:32.069 [2024-11-19 09:45:13.028651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31632 len:8 PRP1 0x0 PRP2 0x0 00:17:32.069 [2024-11-19 09:45:13.028664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.069 [2024-11-19 09:45:13.028678] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:32.069 [2024-11-19 09:45:13.028688] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:32.069 [2024-11-19 09:45:13.028699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31640 len:8 PRP1 0x0 PRP2 0x0 00:17:32.069 [2024-11-19 09:45:13.028712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.069 [2024-11-19 09:45:13.028726] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:32.069 [2024-11-19 09:45:13.028737] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:32.069 [2024-11-19 09:45:13.028747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31648 len:8 PRP1 0x0 PRP2 0x0 00:17:32.069 [2024-11-19 09:45:13.028765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.069 [2024-11-19 09:45:13.028779] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:32.069 [2024-11-19 09:45:13.028789] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:32.069 [2024-11-19 09:45:13.028800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31656 len:8 PRP1 0x0 PRP2 0x0 00:17:32.069 [2024-11-19 09:45:13.028813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.069 [2024-11-19 09:45:13.028834] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:32.069 [2024-11-19 09:45:13.028845] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:32.069 [2024-11-19 09:45:13.028856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31664 len:8 PRP1 0x0 PRP2 0x0 00:17:32.069 [2024-11-19 09:45:13.028873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.069 [2024-11-19 09:45:13.028936] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.3:4422 to 10.0.0.3:4420 00:17:32.069 [2024-11-19 09:45:13.028996] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:32.070 [2024-11-19 09:45:13.029018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.070 [2024-11-19 09:45:13.029034] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:32.070 [2024-11-19 09:45:13.029048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.070 [2024-11-19 09:45:13.029063] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:32.070 [2024-11-19 09:45:13.029076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.070 [2024-11-19 09:45:13.029091] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:32.070 [2024-11-19 09:45:13.029104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.070 [2024-11-19 09:45:13.029119] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:17:32.070 [2024-11-19 09:45:13.032933] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:17:32.070 [2024-11-19 09:45:13.032974] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b2710 (9): Bad file descriptor 00:17:32.070 [2024-11-19 09:45:13.056932] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 00:17:32.070 8810.40 IOPS, 34.42 MiB/s [2024-11-19T09:45:19.693Z] 8856.00 IOPS, 34.59 MiB/s [2024-11-19T09:45:19.693Z] 8887.83 IOPS, 34.72 MiB/s [2024-11-19T09:45:19.693Z] 8890.77 IOPS, 34.73 MiB/s [2024-11-19T09:45:19.693Z] 8882.86 IOPS, 34.70 MiB/s [2024-11-19T09:45:19.693Z] 8885.87 IOPS, 34.71 MiB/s 00:17:32.070 Latency(us) 00:17:32.070 [2024-11-19T09:45:19.693Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:32.070 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:32.070 Verification LBA range: start 0x0 length 0x4000 00:17:32.070 NVMe0n1 : 15.01 8884.72 34.71 235.91 0.00 14001.56 662.81 19065.02 00:17:32.070 [2024-11-19T09:45:19.693Z] =================================================================================================================== 00:17:32.070 [2024-11-19T09:45:19.693Z] Total : 8884.72 34.71 235.91 0.00 14001.56 662.81 19065.02 00:17:32.070 Received shutdown signal, test time was about 15.000000 seconds 00:17:32.070 00:17:32.070 Latency(us) 00:17:32.070 [2024-11-19T09:45:19.693Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:32.070 [2024-11-19T09:45:19.693Z] =================================================================================================================== 00:17:32.070 [2024-11-19T09:45:19.693Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:32.070 09:45:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:17:32.070 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:17:32.070 09:45:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:17:32.070 09:45:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:17:32.070 09:45:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=75597 00:17:32.070 09:45:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 75597 /var/tmp/bdevperf.sock 00:17:32.070 09:45:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 75597 ']' 00:17:32.070 09:45:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:32.070 09:45:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:32.070 09:45:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:17:32.070 09:45:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:32.070 09:45:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:32.070 09:45:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:17:32.070 09:45:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:32.070 09:45:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:17:32.070 09:45:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:17:32.070 [2024-11-19 09:45:19.554271] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:17:32.070 09:45:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:17:32.328 [2024-11-19 09:45:19.846711] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 *** 00:17:32.328 09:45:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:17:32.594 NVMe0n1 00:17:32.594 09:45:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:17:33.201 00:17:33.201 09:45:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:17:33.459 00:17:33.459 09:45:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:17:33.459 09:45:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:17:33.717 09:45:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:33.975 09:45:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:17:37.264 09:45:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:17:37.264 09:45:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:17:37.264 09:45:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=75672 00:17:37.264 09:45:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:37.264 09:45:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 75672 00:17:38.641 { 00:17:38.641 "results": [ 00:17:38.641 { 00:17:38.641 "job": "NVMe0n1", 00:17:38.641 "core_mask": "0x1", 00:17:38.641 "workload": "verify", 00:17:38.641 "status": "finished", 00:17:38.641 "verify_range": { 00:17:38.641 "start": 0, 00:17:38.641 "length": 16384 00:17:38.641 }, 00:17:38.641 "queue_depth": 128, 00:17:38.641 "io_size": 4096, 00:17:38.641 "runtime": 1.020353, 00:17:38.641 "iops": 6543.813758571789, 00:17:38.641 "mibps": 25.56177249442105, 00:17:38.641 "io_failed": 0, 00:17:38.641 "io_timeout": 0, 00:17:38.641 "avg_latency_us": 19479.55923591161, 00:17:38.641 "min_latency_us": 2368.232727272727, 00:17:38.641 "max_latency_us": 18350.08 00:17:38.641 } 00:17:38.641 ], 00:17:38.641 "core_count": 1 00:17:38.641 } 00:17:38.641 09:45:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:17:38.641 [2024-11-19 09:45:18.923464] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 
00:17:38.641 [2024-11-19 09:45:18.923601] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75597 ] 00:17:38.641 [2024-11-19 09:45:19.074102] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:38.641 [2024-11-19 09:45:19.139774] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:38.641 [2024-11-19 09:45:19.196230] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:38.641 [2024-11-19 09:45:21.381878] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.3:4420 to 10.0.0.3:4421 00:17:38.641 [2024-11-19 09:45:21.382041] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:38.641 [2024-11-19 09:45:21.382068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:38.641 [2024-11-19 09:45:21.382087] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:38.641 [2024-11-19 09:45:21.382101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:38.641 [2024-11-19 09:45:21.382116] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:38.641 [2024-11-19 09:45:21.382129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:38.641 [2024-11-19 09:45:21.382144] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:38.641 [2024-11-19 09:45:21.382157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:38.641 [2024-11-19 09:45:21.382171] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:17:38.641 [2024-11-19 09:45:21.382237] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:17:38.641 [2024-11-19 09:45:21.382271] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17d6710 (9): Bad file descriptor 00:17:38.641 [2024-11-19 09:45:21.392894] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:17:38.641 Running I/O for 1 seconds... 
00:17:38.641 6549.00 IOPS, 25.58 MiB/s 00:17:38.641 Latency(us) 00:17:38.641 [2024-11-19T09:45:26.264Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:38.641 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:38.641 Verification LBA range: start 0x0 length 0x4000 00:17:38.641 NVMe0n1 : 1.02 6543.81 25.56 0.00 0.00 19479.56 2368.23 18350.08 00:17:38.641 [2024-11-19T09:45:26.264Z] =================================================================================================================== 00:17:38.641 [2024-11-19T09:45:26.264Z] Total : 6543.81 25.56 0.00 0.00 19479.56 2368.23 18350.08 00:17:38.641 09:45:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:17:38.641 09:45:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:17:38.641 09:45:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:39.208 09:45:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:17:39.208 09:45:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:17:39.208 09:45:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:39.775 09:45:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:17:43.058 09:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:17:43.058 09:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:17:43.058 09:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 75597 00:17:43.058 09:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 75597 ']' 00:17:43.058 09:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 75597 00:17:43.058 09:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:17:43.058 09:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:43.058 09:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75597 00:17:43.058 killing process with pid 75597 00:17:43.058 09:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:43.058 09:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:43.059 09:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75597' 00:17:43.059 09:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 75597 00:17:43.059 09:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 75597 00:17:43.059 09:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:17:43.059 09:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:43.625 09:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:17:43.626 09:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:17:43.626 09:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:17:43.626 09:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:43.626 09:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:17:43.626 09:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:43.626 09:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:17:43.626 09:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:43.626 09:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:43.626 rmmod nvme_tcp 00:17:43.626 rmmod nvme_fabrics 00:17:43.626 rmmod nvme_keyring 00:17:43.626 09:45:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:43.626 09:45:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:17:43.626 09:45:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:17:43.626 09:45:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 75352 ']' 00:17:43.626 09:45:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 75352 00:17:43.626 09:45:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 75352 ']' 00:17:43.626 09:45:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 75352 00:17:43.626 09:45:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:17:43.626 09:45:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:43.626 09:45:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75352 00:17:43.626 killing process with pid 75352 00:17:43.626 09:45:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:43.626 09:45:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:43.626 09:45:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75352' 00:17:43.626 09:45:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 75352 00:17:43.626 09:45:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 75352 00:17:43.885 09:45:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:43.885 09:45:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:43.885 09:45:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:43.885 09:45:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:17:43.885 09:45:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:17:43.885 09:45:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:43.885 09:45:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:17:43.885 09:45:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk 
== \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:43.885 09:45:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:43.885 09:45:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:43.885 09:45:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:43.885 09:45:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:43.885 09:45:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:43.885 09:45:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:43.885 09:45:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:43.885 09:45:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:43.885 09:45:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:43.885 09:45:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:44.145 09:45:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:44.145 09:45:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:44.145 09:45:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:44.145 09:45:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:44.145 09:45:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:44.145 09:45:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:44.145 09:45:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:44.145 09:45:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:44.145 09:45:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@300 -- # return 0 00:17:44.145 00:17:44.145 real 0m32.330s 00:17:44.145 user 2m4.984s 00:17:44.145 sys 0m5.406s 00:17:44.145 09:45:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:44.145 09:45:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:17:44.145 ************************************ 00:17:44.145 END TEST nvmf_failover 00:17:44.145 ************************************ 00:17:44.145 09:45:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:17:44.145 09:45:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:44.145 09:45:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:44.145 09:45:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:44.145 ************************************ 00:17:44.145 START TEST nvmf_host_discovery 00:17:44.145 ************************************ 00:17:44.145 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:17:44.145 * Looking for test storage... 
00:17:44.145 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:44.145 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:44.145 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:17:44.145 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:44.406 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:44.406 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:44.406 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:44.406 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:44.406 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:17:44.406 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:17:44.406 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:17:44.406 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:17:44.406 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:17:44.406 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:17:44.406 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:17:44.406 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:44.406 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:17:44.406 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:17:44.406 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:44.406 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:44.406 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:17:44.406 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:17:44.406 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:44.406 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:17:44.406 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:17:44.406 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:17:44.406 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:17:44.406 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:44.406 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:17:44.406 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:17:44.406 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:44.406 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:44.406 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:17:44.406 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:44.406 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:44.406 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:44.406 --rc genhtml_branch_coverage=1 00:17:44.406 --rc genhtml_function_coverage=1 00:17:44.406 --rc genhtml_legend=1 00:17:44.406 --rc geninfo_all_blocks=1 00:17:44.406 --rc geninfo_unexecuted_blocks=1 00:17:44.406 00:17:44.406 ' 00:17:44.406 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:44.406 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:44.406 --rc genhtml_branch_coverage=1 00:17:44.406 --rc genhtml_function_coverage=1 00:17:44.406 --rc genhtml_legend=1 00:17:44.406 --rc geninfo_all_blocks=1 00:17:44.406 --rc geninfo_unexecuted_blocks=1 00:17:44.406 00:17:44.406 ' 00:17:44.406 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:44.406 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:44.406 --rc genhtml_branch_coverage=1 00:17:44.406 --rc genhtml_function_coverage=1 00:17:44.406 --rc genhtml_legend=1 00:17:44.406 --rc geninfo_all_blocks=1 00:17:44.406 --rc geninfo_unexecuted_blocks=1 00:17:44.406 00:17:44.406 ' 00:17:44.406 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:44.406 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:44.406 --rc genhtml_branch_coverage=1 00:17:44.406 --rc genhtml_function_coverage=1 00:17:44.406 --rc genhtml_legend=1 00:17:44.406 --rc geninfo_all_blocks=1 00:17:44.406 --rc geninfo_unexecuted_blocks=1 00:17:44.406 00:17:44.406 ' 00:17:44.406 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:44.406 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:17:44.406 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:44.406 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:44.406 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:44.406 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:44.406 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:44.406 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:44.406 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:44.406 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:44.406 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:44.406 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:44.406 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c 00:17:44.406 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=9203ba0c-8506-4f0b-a886-a7f874c4694c 00:17:44.406 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:44.406 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:44.406 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:44.406 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:44.406 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:44.406 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:17:44.406 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:44.406 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:44.406 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:44.406 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:44.406 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:44.406 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:44.406 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:17:44.406 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:44.406 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:17:44.406 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:44.406 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:44.407 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:44.407 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:44.407 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:44.407 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:44.407 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:44.407 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:44.407 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:44.407 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:44.407 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:17:44.407 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:17:44.407 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- 
# DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:17:44.407 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:17:44.407 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:17:44.407 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:17:44.407 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:17:44.407 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:44.407 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:44.407 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:44.407 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:44.407 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:44.407 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:44.407 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:44.407 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:44.407 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:17:44.407 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:17:44.407 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:17:44.407 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:17:44.407 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:17:44.407 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@460 -- # nvmf_veth_init 00:17:44.407 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:44.407 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:44.407 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:44.407 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:44.407 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:44.407 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:44.407 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:44.407 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:44.407 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:44.407 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:44.407 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:44.407 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:17:44.407 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:44.407 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:44.407 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:44.407 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:44.407 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:44.407 Cannot find device "nvmf_init_br" 00:17:44.407 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # true 00:17:44.407 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:44.407 Cannot find device "nvmf_init_br2" 00:17:44.407 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # true 00:17:44.407 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:44.407 Cannot find device "nvmf_tgt_br" 00:17:44.407 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # true 00:17:44.407 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:17:44.407 Cannot find device "nvmf_tgt_br2" 00:17:44.407 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # true 00:17:44.407 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:44.407 Cannot find device "nvmf_init_br" 00:17:44.407 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # true 00:17:44.407 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:44.407 Cannot find device "nvmf_init_br2" 00:17:44.407 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # true 00:17:44.407 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:44.407 Cannot find device "nvmf_tgt_br" 00:17:44.407 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # true 00:17:44.407 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:44.407 Cannot find device "nvmf_tgt_br2" 00:17:44.407 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # true 00:17:44.407 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:44.407 Cannot find device "nvmf_br" 00:17:44.407 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # true 00:17:44.407 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:44.407 Cannot find device "nvmf_init_if" 00:17:44.407 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # true 00:17:44.407 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:44.407 Cannot find device "nvmf_init_if2" 00:17:44.407 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # true 00:17:44.407 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:44.407 Cannot open network namespace "nvmf_tgt_ns_spdk": No such 
file or directory 00:17:44.407 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # true 00:17:44.407 09:45:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:44.407 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:44.407 09:45:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # true 00:17:44.407 09:45:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:44.407 09:45:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:44.407 09:45:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:44.407 09:45:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:44.407 09:45:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:44.666 09:45:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:44.666 09:45:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:44.666 09:45:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:44.666 09:45:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:44.666 09:45:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:44.666 09:45:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:44.666 09:45:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:44.666 09:45:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:44.666 09:45:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:44.666 09:45:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:44.666 09:45:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:44.666 09:45:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:44.666 09:45:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:44.666 09:45:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:44.666 09:45:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:44.666 09:45:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:44.666 09:45:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:44.666 09:45:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:44.666 09:45:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:44.666 09:45:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:44.666 09:45:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:44.666 09:45:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:44.666 09:45:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:44.666 09:45:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:44.666 09:45:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:44.667 09:45:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:44.667 09:45:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:44.667 09:45:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:44.667 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:44.667 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.092 ms 00:17:44.667 00:17:44.667 --- 10.0.0.3 ping statistics --- 00:17:44.667 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:44.667 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:17:44.667 09:45:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:44.667 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:44.667 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.069 ms 00:17:44.667 00:17:44.667 --- 10.0.0.4 ping statistics --- 00:17:44.667 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:44.667 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:17:44.667 09:45:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:44.667 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:44.667 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:17:44.667 00:17:44.667 --- 10.0.0.1 ping statistics --- 00:17:44.667 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:44.667 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:17:44.667 09:45:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:44.667 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:44.667 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.052 ms 00:17:44.667 00:17:44.667 --- 10.0.0.2 ping statistics --- 00:17:44.667 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:44.667 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:17:44.667 09:45:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:44.667 09:45:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@461 -- # return 0 00:17:44.667 09:45:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:44.667 09:45:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:44.667 09:45:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:44.667 09:45:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:44.667 09:45:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:44.667 09:45:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:44.667 09:45:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:44.667 09:45:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:17:44.667 09:45:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:44.667 09:45:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:44.667 09:45:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:44.667 09:45:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=75997 00:17:44.667 09:45:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:44.667 09:45:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 75997 00:17:44.667 09:45:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 75997 ']' 00:17:44.667 09:45:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:44.667 09:45:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:44.667 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:44.667 09:45:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:44.667 09:45:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:44.667 09:45:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:44.926 [2024-11-19 09:45:32.311889] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 
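
The nvmf/common.sh trace above builds the test network before the target is started inside the namespace: a network namespace for the target, veth pairs whose bridge-side peers are enslaved to one bridge, addresses 10.0.0.1/.2 on the host side and 10.0.0.3/.4 inside the namespace, iptables ACCEPT rules for the NVMe/TCP port, and ping checks in both directions. The following is a minimal sketch of that topology, reduced to one interface per side (the harness creates two of each and goes through its ipts wrapper); names follow the trace, and it has to run as root.

# namespace for the target plus one veth pair per side
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

# host side keeps 10.0.0.1, the namespaced target end gets 10.0.0.3
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

# bring the links up and join the bridge-side peers to one bridge
ip link add nvmf_br type bridge
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br

# open the NVMe/TCP port and check reachability in both directions
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1

With the topology verified, nvmf_tgt is launched inside the namespace (the "Starting SPDK ... / DPDK EAL parameters" lines that follow are its startup banner).
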
00:17:44.926 [2024-11-19 09:45:32.311994] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:44.926 [2024-11-19 09:45:32.470377] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:44.926 [2024-11-19 09:45:32.535493] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:44.926 [2024-11-19 09:45:32.535548] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:44.926 [2024-11-19 09:45:32.535562] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:44.926 [2024-11-19 09:45:32.535572] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:44.926 [2024-11-19 09:45:32.535581] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:44.926 [2024-11-19 09:45:32.536063] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:45.184 [2024-11-19 09:45:32.594755] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:45.185 09:45:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:45.185 09:45:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:17:45.185 09:45:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:45.185 09:45:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:45.185 09:45:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:45.185 09:45:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:45.185 09:45:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:45.185 09:45:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.185 09:45:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:45.185 [2024-11-19 09:45:32.709434] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:45.185 09:45:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.185 09:45:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:17:45.185 09:45:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.185 09:45:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:45.185 [2024-11-19 09:45:32.717567] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:17:45.185 09:45:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.185 09:45:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:17:45.185 09:45:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.185 09:45:32 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:45.185 null0 00:17:45.185 09:45:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.185 09:45:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:17:45.185 09:45:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.185 09:45:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:45.185 null1 00:17:45.185 09:45:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.185 09:45:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:17:45.185 09:45:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.185 09:45:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:45.185 09:45:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.185 09:45:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=76022 00:17:45.185 09:45:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:17:45.185 09:45:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 76022 /tmp/host.sock 00:17:45.185 09:45:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 76022 ']' 00:17:45.185 09:45:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:17:45.185 09:45:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:45.185 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:17:45.185 09:45:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:17:45.185 09:45:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:45.185 09:45:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:45.443 [2024-11-19 09:45:32.807787] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 
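
Once the namespaced target is up, discovery.sh configures it over JSON-RPC and then launches a second SPDK application on core 0 that plays the NVMe-oF host role, with its own RPC socket at /tmp/host.sock. A sketch of those steps is below; the harness issues them through its rpc_cmd/waitforlisten wrappers, whereas the sketch calls SPDK's scripts/rpc.py directly, and the flags are copied from the trace rather than explained.

# target side (default RPC socket /var/tmp/spdk.sock inside the namespace)
rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery \
    -t tcp -a 10.0.0.3 -s 8009
rpc.py bdev_null_create null0 1000 512    # size/block size as in the trace
rpc.py bdev_null_create null1 1000 512
rpc.py bdev_wait_for_examine

# host side: a second SPDK app whose bdev_nvme layer will run discovery
/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock &
hostpid=$!
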
00:17:45.443 [2024-11-19 09:45:32.807880] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76022 ] 00:17:45.443 [2024-11-19 09:45:32.959126] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:45.443 [2024-11-19 09:45:33.026148] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:45.702 [2024-11-19 09:45:33.085707] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:46.269 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:46.269 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:17:46.269 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:46.269 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:17:46.269 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.269 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:46.269 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.269 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:17:46.269 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.269 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:46.269 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.269 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:17:46.269 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:17:46.269 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:17:46.269 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.269 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:46.269 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:17:46.269 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:17:46.269 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:17:46.269 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.269 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:17:46.269 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:17:46.269 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:46.269 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:46.269 09:45:33 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.269 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:17:46.269 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:46.269 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:17:46.527 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.527 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:17:46.527 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:17:46.527 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.527 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:46.527 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.527 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:17:46.527 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:17:46.527 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.527 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:46.527 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:17:46.527 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:17:46.527 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:17:46.527 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.527 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:17:46.527 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:17:46.527 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:46.527 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:46.527 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.527 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:46.527 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:17:46.527 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:17:46.527 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.527 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:17:46.527 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:17:46.527 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.527 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:46.528 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.528 09:45:34 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:17:46.528 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:17:46.528 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:17:46.528 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.528 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:46.528 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:17:46.528 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:17:46.528 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.528 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:17:46.528 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:17:46.528 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:46.528 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:46.528 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.528 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:17:46.528 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:46.528 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:17:46.528 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.786 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:17:46.786 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:17:46.786 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.786 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:46.786 [2024-11-19 09:45:34.185981] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:46.786 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.786 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:17:46.786 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:17:46.786 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.786 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:46.786 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:17:46.786 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:17:46.786 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:17:46.786 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.786 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ 
'' == '' ]] 00:17:46.786 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:17:46.786 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:46.786 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.786 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:46.786 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:46.786 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:17:46.786 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:17:46.786 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.786 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:17:46.786 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:17:46.786 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:17:46.786 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:17:46.786 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:17:46.786 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:17:46.786 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:17:46.786 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:17:46.786 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:17:46.786 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:17:46.786 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:17:46.786 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.786 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:46.786 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.786 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:17:46.786 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:17:46.786 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:17:46.786 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:17:46.786 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:17:46.786 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.786 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:46.786 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.786 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:17:46.786 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:17:46.786 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:17:46.786 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:17:46.786 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:17:46.786 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:17:46.786 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:17:46.787 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:17:46.787 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.787 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:17:46.787 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:17:46.787 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:46.787 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.044 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:17:47.044 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:17:47.302 [2024-11-19 09:45:34.832751] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:17:47.302 [2024-11-19 09:45:34.832809] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:17:47.302 [2024-11-19 09:45:34.832848] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:17:47.302 
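
The host application is pointed at the discovery service on 10.0.0.3:8009, and from then on the script polls two helpers until the discovered controller and its namespaces appear: get_subsystem_names and get_bdev_list, both thin wrappers around host-socket RPCs piped through jq. The sketch below reconstructs them from the xtrace above; the function bodies mirror the traced pipelines, with rpc.py standing in for the rpc_cmd wrapper.

host_sock=/tmp/host.sock

# ask the host-side bdev_nvme layer to follow the discovery service
rpc.py -s "$host_sock" log_set_flag bdev_nvme
rpc.py -s "$host_sock" bdev_nvme_start_discovery -b nvme -t tcp \
    -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test

# names of NVMe controllers the host has attached (e.g. "nvme0")
get_subsystem_names() {
    rpc.py -s "$host_sock" bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
}

# names of the bdevs created for discovered namespaces (e.g. "nvme0n1 nvme0n2")
get_bdev_list() {
    rpc.py -s "$host_sock" bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}
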
[2024-11-19 09:45:34.838806] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 00:17:47.302 [2024-11-19 09:45:34.893191] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.3:4420 00:17:47.302 [2024-11-19 09:45:34.894338] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x158de60:1 started. 00:17:47.302 [2024-11-19 09:45:34.896348] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:17:47.302 [2024-11-19 09:45:34.896374] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:17:47.302 [2024-11-19 09:45:34.901136] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x158de60 was disconnected and freed. delete nvme_qpair. 00:17:47.870 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:17:47.870 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:17:47.870 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:17:47.870 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:17:47.870 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:17:47.870 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:17:47.870 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.870 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:47.870 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:17:47.870 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.870 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:47.870 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:17:47.870 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:17:47.870 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:17:47.870 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:17:47.870 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:17:47.870 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:17:47.870 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:17:47.870 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:47.870 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.870 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:47.870 09:45:35 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:47.870 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:17:47.870 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:17:48.129 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.129 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:17:48.129 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:17:48.129 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:17:48.129 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:17:48.129 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:17:48.129 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:17:48.129 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:17:48.129 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:17:48.129 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:17:48.129 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.129 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:17:48.129 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:48.129 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:17:48.129 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:17:48.129 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.129 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:17:48.129 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:17:48.129 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:17:48.129 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:17:48.129 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:17:48.129 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:17:48.129 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:17:48.129 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:17:48.129 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:17:48.129 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@921 -- # get_notification_count 00:17:48.129 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:17:48.129 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:17:48.129 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.130 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:48.130 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.130 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:17:48.130 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:17:48.130 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:17:48.130 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:17:48.130 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:17:48.130 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.130 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:48.130 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.130 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:17:48.130 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:17:48.130 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:17:48.130 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:17:48.130 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:17:48.130 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:17:48.130 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:48.130 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:48.130 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:17:48.130 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.130 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:17:48.130 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:48.130 [2024-11-19 09:45:35.654983] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x159c000:1 started. 00:17:48.130 [2024-11-19 09:45:35.661652] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x159c000 was disconnected and freed. delete nvme_qpair. 
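
Each expectation in the test is wrapped in waitforcondition, a bounded retry loop from autotest_common.sh, and namespace add events are counted through notify_get_notifications against the host socket. Both helpers can be read back out of the xtrace above; the sketch below reconstructs them (again with rpc.py in place of the rpc_cmd wrapper) and shows the kind of check the script makes after the second null bdev is attached as a namespace.

# retry a shell condition up to 10 times, one second apart
waitforcondition() {
    local cond=$1
    local max=10
    while ((max--)); do
        if eval "$cond"; then
            return 0
        fi
        sleep 1
    done
    return 1
}

# count bdev notifications issued since $notify_id and advance the cursor
get_notification_count() {
    notification_count=$(rpc.py -s /tmp/host.sock notify_get_notifications -i "$notify_id" | jq '. | length')
    notify_id=$((notify_id + notification_count))
}

# example: block until the second namespace shows up as a bdev on the host
waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
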
00:17:48.130 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.130 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:17:48.130 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:17:48.130 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:17:48.130 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:17:48.130 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:17:48.130 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:17:48.130 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:17:48.130 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:17:48.130 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:17:48.130 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:17:48.130 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:17:48.130 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:17:48.130 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.130 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:48.130 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.389 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:17:48.389 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:17:48.389 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:17:48.389 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:17:48.389 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4421 00:17:48.389 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.389 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:48.389 [2024-11-19 09:45:35.763806] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:17:48.389 [2024-11-19 09:45:35.764742] bdev_nvme.c:7366:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:17:48.389 [2024-11-19 09:45:35.764798] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:17:48.390 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.390 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 00:17:48.390 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:17:48.390 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:17:48.390 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:17:48.390 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:17:48.390 [2024-11-19 09:45:35.770728] bdev_nvme.c:7308:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new path for nvme0 00:17:48.390 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:17:48.390 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:17:48.390 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:17:48.390 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:17:48.390 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.390 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:17:48.390 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:48.390 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.390 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:48.390 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:17:48.390 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:17:48.390 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:17:48.390 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:17:48.390 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:17:48.390 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:17:48.390 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:17:48.390 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:48.390 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.390 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:48.390 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:48.390 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:17:48.390 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:17:48.390 [2024-11-19 09:45:35.836474] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.3:4421 00:17:48.390 [2024-11-19 09:45:35.836538] 
bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:17:48.390 [2024-11-19 09:45:35.836552] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:17:48.390 [2024-11-19 09:45:35.836558] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:17:48.390 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.390 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:17:48.390 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:17:48.390 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:17:48.390 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:17:48.390 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:17:48.390 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:17:48.390 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:17:48.390 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:17:48.390 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:17:48.390 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:17:48.390 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.390 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:17:48.390 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:48.390 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:17:48.390 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.390 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:17:48.390 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:17:48.390 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:17:48.390 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:17:48.390 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:17:48.390 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:17:48.390 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:17:48.390 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@920 -- # (( max-- )) 00:17:48.390 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:17:48.390 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:17:48.390 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:17:48.390 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.390 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:48.390 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:17:48.390 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.390 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:17:48.390 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:17:48.390 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:17:48.390 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:17:48.390 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:17:48.390 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.390 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:48.390 [2024-11-19 09:45:35.992567] bdev_nvme.c:7366:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:17:48.390 [2024-11-19 09:45:35.992668] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:17:48.390 [2024-11-19 09:45:35.993591] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:48.390 [2024-11-19 09:45:35.993660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:48.390 [2024-11-19 09:45:35.993691] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:48.390 [2024-11-19 09:45:35.993700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:48.390 [2024-11-19 09:45:35.993710] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:48.390 [2024-11-19 09:45:35.993719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:48.390 [2024-11-19 09:45:35.993730] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:48.390 [2024-11-19 09:45:35.993739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:48.390 [2024-11-19 09:45:35.993748] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x156a230 is same with the state(6) to be set 00:17:48.390 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.390 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:17:48.390 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:17:48.390 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:17:48.390 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:17:48.390 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:17:48.390 [2024-11-19 09:45:35.998557] bdev_nvme.c:7171:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 not found 00:17:48.390 [2024-11-19 09:45:35.998592] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:17:48.390 [2024-11-19 09:45:35.998668] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x156a230 (9): Bad file descriptor 00:17:48.390 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:17:48.390 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:17:48.391 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.391 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:48.391 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:17:48.391 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:17:48.391 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:17:48.650 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.650 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:48.650 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:17:48.650 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:17:48.650 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:17:48.650 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:17:48.650 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:17:48.650 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:17:48.650 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:17:48.650 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:48.650 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:17:48.650 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:48.650 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:48.650 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:17:48.650 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:17:48.650 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.650 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:17:48.650 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:17:48.650 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:17:48.650 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:17:48.650 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:17:48.650 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:17:48.650 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:17:48.650 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:17:48.650 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:17:48.650 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:17:48.650 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:17:48.650 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.650 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:48.650 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:17:48.650 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.650 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:17:48.650 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:17:48.650 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:17:48.650 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:17:48.650 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:17:48.650 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:17:48.650 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:17:48.650 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:17:48.650 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:17:48.650 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:17:48.650 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:17:48.650 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:17:48.650 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.650 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:48.650 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.650 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:17:48.650 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:17:48.650 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:17:48.650 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:17:48.650 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:17:48.650 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.650 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:48.650 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.650 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:17:48.650 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:17:48.650 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:17:48.650 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:17:48.650 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:17:48.651 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:17:48.651 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:17:48.651 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.651 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:48.651 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:17:48.651 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:17:48.651 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:17:48.651 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.909 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:17:48.909 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:17:48.909 
09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:17:48.909 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:17:48.909 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:17:48.909 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:17:48.909 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:17:48.909 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:17:48.909 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:48.909 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:48.909 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.909 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:17:48.909 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:48.909 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:17:48.909 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.909 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:17:48.909 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:17:48.909 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:17:48.909 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:17:48.909 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:17:48.909 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:17:48.909 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:17:48.909 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:17:48.909 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:17:48.909 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:17:48.909 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:17:48.909 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.909 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:48.909 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:17:48.909 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.909 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:17:48.909 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:17:48.909 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:17:48.909 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:17:48.909 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:17:48.909 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.909 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:49.845 [2024-11-19 09:45:37.413953] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:17:49.845 [2024-11-19 09:45:37.413993] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:17:49.845 [2024-11-19 09:45:37.414038] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:17:49.845 [2024-11-19 09:45:37.420008] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new subsystem nvme0 00:17:50.104 [2024-11-19 09:45:37.478483] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.3:4421 00:17:50.104 [2024-11-19 09:45:37.479327] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x15594a0:1 started. 00:17:50.104 [2024-11-19 09:45:37.481549] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:17:50.104 [2024-11-19 09:45:37.481596] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:17:50.104 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.104 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:17:50.104 [2024-11-19 09:45:37.482998] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x15594a0 was disconnected and freed. delete nvme_qpair. 
00:17:50.104 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:17:50.104 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:17:50.104 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:50.104 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:50.104 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:50.104 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:50.104 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:17:50.104 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.104 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:50.104 request: 00:17:50.104 { 00:17:50.104 "name": "nvme", 00:17:50.104 "trtype": "tcp", 00:17:50.104 "traddr": "10.0.0.3", 00:17:50.104 "adrfam": "ipv4", 00:17:50.104 "trsvcid": "8009", 00:17:50.104 "hostnqn": "nqn.2021-12.io.spdk:test", 00:17:50.104 "wait_for_attach": true, 00:17:50.104 "method": "bdev_nvme_start_discovery", 00:17:50.104 "req_id": 1 00:17:50.104 } 00:17:50.104 Got JSON-RPC error response 00:17:50.104 response: 00:17:50.104 { 00:17:50.104 "code": -17, 00:17:50.104 "message": "File exists" 00:17:50.104 } 00:17:50.104 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:50.104 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:17:50.104 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:50.104 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:50.104 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:50.104 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:17:50.104 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:17:50.104 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:17:50.104 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.104 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:17:50.104 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:50.104 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:17:50.104 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.104 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:17:50.104 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:17:50.104 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:50.104 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:50.104 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.104 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:17:50.104 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:50.104 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:17:50.104 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.104 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:17:50.104 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:17:50.104 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:17:50.105 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:17:50.105 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:50.105 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:50.105 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:50.105 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:50.105 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:17:50.105 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.105 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:50.105 request: 00:17:50.105 { 00:17:50.105 "name": "nvme_second", 00:17:50.105 "trtype": "tcp", 00:17:50.105 "traddr": "10.0.0.3", 00:17:50.105 "adrfam": "ipv4", 00:17:50.105 "trsvcid": "8009", 00:17:50.105 "hostnqn": "nqn.2021-12.io.spdk:test", 00:17:50.105 "wait_for_attach": true, 00:17:50.105 "method": "bdev_nvme_start_discovery", 00:17:50.105 "req_id": 1 00:17:50.105 } 00:17:50.105 Got JSON-RPC error response 00:17:50.105 response: 00:17:50.105 { 00:17:50.105 "code": -17, 00:17:50.105 "message": "File exists" 00:17:50.105 } 00:17:50.105 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:50.105 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:17:50.105 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:50.105 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:50.105 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:50.105 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # 
get_discovery_ctrlrs 00:17:50.105 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:17:50.105 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:17:50.105 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.105 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:17:50.105 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:50.105 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:17:50.105 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.105 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:17:50.105 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:17:50.105 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:50.105 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.105 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:50.105 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:50.105 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:17:50.105 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:17:50.105 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.363 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:17:50.363 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:17:50.363 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:17:50.363 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:17:50.363 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:50.363 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:50.363 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:50.363 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:50.363 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:17:50.363 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.363 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:51.298 [2024-11-19 09:45:38.737931] uring.c: 664:uring_sock_create: *ERROR*: connect() 
failed, errno = 111 00:17:51.298 [2024-11-19 09:45:38.738062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559310 with addr=10.0.0.3, port=8010 00:17:51.298 [2024-11-19 09:45:38.738104] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:17:51.298 [2024-11-19 09:45:38.738114] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:17:51.298 [2024-11-19 09:45:38.738124] bdev_nvme.c:7452:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 00:17:52.234 [2024-11-19 09:45:39.737939] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:17:52.234 [2024-11-19 09:45:39.738039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559310 with addr=10.0.0.3, port=8010 00:17:52.234 [2024-11-19 09:45:39.738064] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:17:52.234 [2024-11-19 09:45:39.738075] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:17:52.234 [2024-11-19 09:45:39.738084] bdev_nvme.c:7452:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 00:17:53.169 [2024-11-19 09:45:40.737803] bdev_nvme.c:7427:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] timed out while attaching discovery ctrlr 00:17:53.169 request: 00:17:53.169 { 00:17:53.169 "name": "nvme_second", 00:17:53.169 "trtype": "tcp", 00:17:53.169 "traddr": "10.0.0.3", 00:17:53.169 "adrfam": "ipv4", 00:17:53.169 "trsvcid": "8010", 00:17:53.169 "hostnqn": "nqn.2021-12.io.spdk:test", 00:17:53.169 "wait_for_attach": false, 00:17:53.169 "attach_timeout_ms": 3000, 00:17:53.169 "method": "bdev_nvme_start_discovery", 00:17:53.169 "req_id": 1 00:17:53.169 } 00:17:53.169 Got JSON-RPC error response 00:17:53.169 response: 00:17:53.169 { 00:17:53.169 "code": -110, 00:17:53.169 "message": "Connection timed out" 00:17:53.170 } 00:17:53.170 09:45:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:53.170 09:45:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:17:53.170 09:45:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:53.170 09:45:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:53.170 09:45:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:53.170 09:45:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:17:53.170 09:45:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:17:53.170 09:45:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.170 09:45:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:53.170 09:45:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:17:53.170 09:45:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:17:53.170 09:45:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:17:53.170 09:45:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.428 09:45:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:17:53.429 09:45:40 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:17:53.429 09:45:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 76022 00:17:53.429 09:45:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:17:53.429 09:45:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:53.429 09:45:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:17:53.429 09:45:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:53.429 09:45:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:17:53.429 09:45:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:53.429 09:45:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:53.429 rmmod nvme_tcp 00:17:53.429 rmmod nvme_fabrics 00:17:53.429 rmmod nvme_keyring 00:17:53.429 09:45:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:53.429 09:45:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:17:53.429 09:45:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:17:53.429 09:45:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 75997 ']' 00:17:53.429 09:45:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 75997 00:17:53.429 09:45:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 75997 ']' 00:17:53.429 09:45:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 75997 00:17:53.429 09:45:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:17:53.429 09:45:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:53.429 09:45:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75997 00:17:53.429 09:45:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:53.429 09:45:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:53.429 killing process with pid 75997 00:17:53.429 09:45:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75997' 00:17:53.429 09:45:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 75997 00:17:53.429 09:45:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 75997 00:17:53.711 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:53.711 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:53.711 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:53.711 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:17:53.712 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:53.712 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:17:53.712 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:17:53.712 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:53.712 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:53.712 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:53.712 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:53.712 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:53.712 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:53.712 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:53.712 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:53.712 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:53.712 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:53.712 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:53.712 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:53.996 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:53.996 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:53.996 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:53.996 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:53.996 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:53.996 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:53.996 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:53.996 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@300 -- # return 0 00:17:53.996 00:17:53.996 real 0m9.758s 00:17:53.996 user 0m18.678s 00:17:53.996 sys 0m2.035s 00:17:53.996 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:53.996 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:53.996 ************************************ 00:17:53.996 END TEST nvmf_host_discovery 00:17:53.996 ************************************ 00:17:53.996 09:45:41 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:17:53.996 09:45:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:53.996 09:45:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:53.996 09:45:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:53.996 ************************************ 00:17:53.996 START TEST nvmf_host_multipath_status 00:17:53.996 ************************************ 00:17:53.996 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:17:53.996 * Looking for test storage... 00:17:53.996 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:53.996 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:53.996 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lcov --version 00:17:53.996 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:54.255 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:54.255 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:54.255 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:54.256 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:54.256 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:17:54.256 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:17:54.256 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:17:54.256 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:17:54.256 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:17:54.256 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:17:54.256 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:17:54.256 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:54.256 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:17:54.256 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:17:54.256 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:54.256 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:54.256 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:17:54.256 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:17:54.256 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:54.256 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:17:54.256 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:17:54.256 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:17:54.256 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:17:54.256 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:54.256 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:17:54.256 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:17:54.256 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:54.256 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:54.256 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:17:54.256 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:54.256 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:54.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:54.256 --rc genhtml_branch_coverage=1 00:17:54.256 --rc genhtml_function_coverage=1 00:17:54.256 --rc genhtml_legend=1 00:17:54.256 --rc geninfo_all_blocks=1 00:17:54.256 --rc geninfo_unexecuted_blocks=1 00:17:54.256 00:17:54.256 ' 00:17:54.256 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:54.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:54.256 --rc genhtml_branch_coverage=1 00:17:54.256 --rc genhtml_function_coverage=1 00:17:54.256 --rc genhtml_legend=1 00:17:54.256 --rc geninfo_all_blocks=1 00:17:54.256 --rc geninfo_unexecuted_blocks=1 00:17:54.256 00:17:54.256 ' 00:17:54.256 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:54.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:54.256 --rc genhtml_branch_coverage=1 00:17:54.256 --rc genhtml_function_coverage=1 00:17:54.256 --rc genhtml_legend=1 00:17:54.256 --rc geninfo_all_blocks=1 00:17:54.256 --rc geninfo_unexecuted_blocks=1 00:17:54.256 00:17:54.256 ' 00:17:54.256 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:54.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:54.256 --rc genhtml_branch_coverage=1 00:17:54.256 --rc genhtml_function_coverage=1 00:17:54.256 --rc genhtml_legend=1 00:17:54.256 --rc geninfo_all_blocks=1 00:17:54.256 --rc geninfo_unexecuted_blocks=1 00:17:54.256 00:17:54.256 ' 00:17:54.256 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:54.256 09:45:41 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:17:54.256 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:54.256 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:54.256 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:54.256 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:54.256 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:54.256 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:54.256 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:54.256 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:54.256 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:54.256 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:54.256 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c 00:17:54.256 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=9203ba0c-8506-4f0b-a886-a7f874c4694c 00:17:54.256 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:54.256 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:54.256 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:54.256 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:54.256 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:54.256 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:17:54.256 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:54.256 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:54.256 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:54.256 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:54.256 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:54.256 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:54.256 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:17:54.256 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:54.256 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:17:54.256 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:54.256 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:54.256 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:54.256 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:54.256 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:54.256 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:54.256 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:54.256 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:54.256 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:54.256 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:54.256 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:17:54.256 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:17:54.256 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:54.256 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:17:54.257 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:54.257 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:17:54.257 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:17:54.257 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:54.257 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:54.257 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:54.257 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:54.257 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:54.257 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:54.257 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:54.257 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:54.257 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:17:54.257 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:17:54.257 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:17:54.257 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:17:54.257 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:17:54.257 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@460 -- # nvmf_veth_init 00:17:54.257 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:54.257 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:54.257 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:54.257 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:54.257 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:54.257 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:54.257 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:54.257 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:54.257 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@153 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:54.257 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:54.257 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:54.257 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:54.257 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:54.257 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:54.257 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:54.257 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:54.257 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:54.257 Cannot find device "nvmf_init_br" 00:17:54.257 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # true 00:17:54.257 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:54.257 Cannot find device "nvmf_init_br2" 00:17:54.257 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # true 00:17:54.257 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:54.257 Cannot find device "nvmf_tgt_br" 00:17:54.257 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # true 00:17:54.257 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:17:54.257 Cannot find device "nvmf_tgt_br2" 00:17:54.257 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # true 00:17:54.257 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:54.257 Cannot find device "nvmf_init_br" 00:17:54.257 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # true 00:17:54.257 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:54.257 Cannot find device "nvmf_init_br2" 00:17:54.257 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # true 00:17:54.257 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:54.257 Cannot find device "nvmf_tgt_br" 00:17:54.257 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # true 00:17:54.257 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:54.257 Cannot find device "nvmf_tgt_br2" 00:17:54.257 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # true 00:17:54.257 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:54.257 Cannot find device "nvmf_br" 00:17:54.257 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # true 00:17:54.257 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # ip link delete 
nvmf_init_if 00:17:54.257 Cannot find device "nvmf_init_if" 00:17:54.257 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # true 00:17:54.257 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:54.257 Cannot find device "nvmf_init_if2" 00:17:54.257 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # true 00:17:54.257 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:54.257 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:54.257 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # true 00:17:54.257 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:54.257 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:54.257 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # true 00:17:54.257 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:54.257 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:54.257 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:54.257 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:54.257 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:54.257 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:54.257 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:54.257 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:54.257 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:54.517 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:54.517 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:54.517 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:54.517 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:54.517 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:54.517 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:54.517 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:54.517 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:54.517 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:54.517 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:54.517 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:54.517 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:54.517 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:54.517 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:54.517 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:54.517 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:54.518 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:54.518 09:45:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:54.518 09:45:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:54.518 09:45:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:54.518 09:45:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:54.518 09:45:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:54.518 09:45:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:54.518 09:45:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:54.518 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:54.518 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.086 ms 00:17:54.518 00:17:54.518 --- 10.0.0.3 ping statistics --- 00:17:54.518 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:54.518 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:17:54.518 09:45:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:54.518 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:54.518 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.047 ms 00:17:54.518 00:17:54.518 --- 10.0.0.4 ping statistics --- 00:17:54.518 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:54.518 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:17:54.518 09:45:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:54.518 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:54.518 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:17:54.518 00:17:54.518 --- 10.0.0.1 ping statistics --- 00:17:54.518 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:54.518 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:17:54.518 09:45:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:54.518 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:54.518 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.055 ms 00:17:54.518 00:17:54.518 --- 10.0.0.2 ping statistics --- 00:17:54.518 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:54.518 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:17:54.518 09:45:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:54.518 09:45:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@461 -- # return 0 00:17:54.518 09:45:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:54.518 09:45:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:54.518 09:45:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:54.518 09:45:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:54.518 09:45:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:54.518 09:45:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:54.518 09:45:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:54.518 09:45:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:17:54.518 09:45:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:54.518 09:45:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:54.518 09:45:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:17:54.518 09:45:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=76523 00:17:54.518 09:45:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 76523 00:17:54.518 09:45:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 76523 ']' 00:17:54.518 09:45:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:17:54.518 09:45:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:54.518 09:45:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:54.518 09:45:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:54.518 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
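nvmfappstart above launches the target inside that namespace and then blocks in waitforlisten until the RPC socket answers. waitforlisten's body is not shown in this excerpt, so the polling loop below is only a stand-in for it; the launch command itself is copied from the log (-m 0x3 pins reactors to cores 0 and 1, -e 0xFFFF enables all tracepoint groups, as the app_setup_trace notices confirm):

# Start nvmf_tgt in the target namespace and remember its pid for later cleanup.
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
nvmfpid=$!

# Hypothetical stand-in for waitforlisten: poll the default RPC socket until the app responds.
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
    sleep 0.5
done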
00:17:54.518 09:45:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:54.518 09:45:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:17:54.518 [2024-11-19 09:45:42.131200] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:17:54.518 [2024-11-19 09:45:42.131312] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:54.778 [2024-11-19 09:45:42.285670] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:54.778 [2024-11-19 09:45:42.355272] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:54.778 [2024-11-19 09:45:42.355327] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:54.778 [2024-11-19 09:45:42.355343] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:54.778 [2024-11-19 09:45:42.355354] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:54.778 [2024-11-19 09:45:42.355363] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:54.778 [2024-11-19 09:45:42.356595] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:54.778 [2024-11-19 09:45:42.356608] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:55.037 [2024-11-19 09:45:42.416041] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:55.037 09:45:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:55.037 09:45:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:17:55.037 09:45:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:55.037 09:45:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:55.037 09:45:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:17:55.037 09:45:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:55.037 09:45:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=76523 00:17:55.037 09:45:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:55.295 [2024-11-19 09:45:42.825057] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:55.295 09:45:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:17:55.554 Malloc0 00:17:55.813 09:45:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:17:56.071 09:45:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:56.329 09:45:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:17:56.586 [2024-11-19 09:45:43.984863] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:56.586 09:45:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:17:56.845 [2024-11-19 09:45:44.245035] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:17:56.845 09:45:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=76572 00:17:56.845 09:45:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:17:56.845 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:56.845 09:45:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:56.845 09:45:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 76572 /var/tmp/bdevperf.sock 00:17:56.845 09:45:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 76572 ']' 00:17:56.845 09:45:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:56.845 09:45:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:56.845 09:45:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
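Everything the test has provisioned so far condenses into a short RPC sequence. The commands below are copied from the log (multipath_status.sh@36 through @56, including the two attach calls that appear immediately after this point); the flag readings in the comments, such as -r enabling ANA reporting, follow the usual rpc.py option names and are given here for orientation only:

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Target side: TCP transport, a 64 MB / 512 B-block Malloc bdev, and one subsystem with ANA
# reporting (-r) exposed through two listeners, i.e. two paths to the same namespace.
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421

# Host side: bdevperf idles (-z) on its own RPC socket; the controller is then attached twice
# in multipath mode, once per listener, before perform_tests starts the 90-second verify job.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 &
$RPC -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
$RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
$RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10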
00:17:56.845 09:45:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:56.845 09:45:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:17:57.103 09:45:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:57.103 09:45:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:17:57.103 09:45:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:17:57.361 09:45:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:17:57.929 Nvme0n1 00:17:57.929 09:45:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:17:58.188 Nvme0n1 00:17:58.188 09:45:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:17:58.188 09:45:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:18:00.109 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:18:00.109 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:18:00.366 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:18:00.624 09:45:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:18:02.006 09:45:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:18:02.006 09:45:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:18:02.006 09:45:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:02.006 09:45:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:18:02.006 09:45:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:02.006 09:45:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:18:02.006 09:45:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:02.006 09:45:49 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:18:02.264 09:45:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:02.264 09:45:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:18:02.264 09:45:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:02.264 09:45:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:18:02.522 09:45:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:02.522 09:45:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:18:02.522 09:45:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:02.522 09:45:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:18:02.781 09:45:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:02.781 09:45:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:18:02.781 09:45:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:02.781 09:45:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:18:03.039 09:45:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:03.039 09:45:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:18:03.039 09:45:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:03.039 09:45:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:18:03.298 09:45:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:03.298 09:45:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:18:03.298 09:45:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:18:03.556 09:45:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:18:03.814 09:45:51 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:18:04.811 09:45:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:18:04.811 09:45:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:18:04.811 09:45:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:04.811 09:45:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:18:05.075 09:45:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:05.075 09:45:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:18:05.075 09:45:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:05.075 09:45:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:18:05.642 09:45:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:05.642 09:45:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:18:05.642 09:45:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:05.642 09:45:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:18:05.901 09:45:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:05.901 09:45:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:18:05.901 09:45:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:05.901 09:45:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:18:06.160 09:45:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:06.160 09:45:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:18:06.160 09:45:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:06.160 09:45:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:18:06.418 09:45:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:06.418 09:45:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:18:06.418 09:45:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:18:06.418 09:45:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:06.677 09:45:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:06.677 09:45:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:18:06.677 09:45:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:18:06.936 09:45:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 00:18:07.195 09:45:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:18:08.130 09:45:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:18:08.130 09:45:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:18:08.130 09:45:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:08.130 09:45:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:18:08.695 09:45:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:08.695 09:45:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:18:08.695 09:45:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:08.695 09:45:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:18:08.953 09:45:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:08.953 09:45:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:18:08.953 09:45:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:08.953 09:45:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:18:09.211 09:45:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:09.211 09:45:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 
connected true 00:18:09.212 09:45:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:09.212 09:45:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:18:09.470 09:45:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:09.471 09:45:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:18:09.471 09:45:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:09.471 09:45:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:18:09.730 09:45:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:09.730 09:45:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:18:09.730 09:45:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:09.730 09:45:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:18:09.988 09:45:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:09.988 09:45:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:18:09.988 09:45:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:18:10.246 09:45:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:18:10.505 09:45:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:18:11.906 09:45:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:18:11.906 09:45:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:18:11.906 09:45:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:11.906 09:45:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:18:11.906 09:45:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:11.906 09:45:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:18:11.906 09:45:59 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:18:11.906 09:45:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:12.165 09:45:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:12.165 09:45:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:18:12.165 09:45:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:12.165 09:45:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:18:12.421 09:46:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:12.421 09:46:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:18:12.421 09:46:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:12.421 09:46:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:18:12.988 09:46:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:12.988 09:46:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:18:12.988 09:46:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:12.988 09:46:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:18:13.246 09:46:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:13.246 09:46:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:18:13.246 09:46:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:13.246 09:46:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:18:13.504 09:46:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:13.504 09:46:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:18:13.504 09:46:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:18:13.761 09:46:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:18:14.019 09:46:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:18:14.952 09:46:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:18:14.952 09:46:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:18:14.952 09:46:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:14.952 09:46:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:18:15.211 09:46:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:15.211 09:46:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:18:15.211 09:46:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:15.211 09:46:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:18:15.483 09:46:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:15.483 09:46:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:18:15.483 09:46:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:15.483 09:46:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:18:16.075 09:46:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:16.075 09:46:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:18:16.075 09:46:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:18:16.075 09:46:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:16.075 09:46:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:16.075 09:46:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:18:16.075 09:46:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:16.075 09:46:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4420").accessible' 00:18:16.640 09:46:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:16.640 09:46:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:18:16.640 09:46:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:16.640 09:46:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:18:16.640 09:46:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:16.640 09:46:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:18:16.640 09:46:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:18:17.207 09:46:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:18:17.465 09:46:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:18:18.400 09:46:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:18:18.400 09:46:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:18:18.400 09:46:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:18.400 09:46:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:18:18.658 09:46:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:18.658 09:46:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:18:18.658 09:46:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:18.658 09:46:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:18:18.916 09:46:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:18.916 09:46:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:18:18.916 09:46:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:18.916 09:46:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 
00:18:19.174 09:46:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:19.174 09:46:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:18:19.174 09:46:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:19.174 09:46:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:18:19.432 09:46:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:19.432 09:46:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:18:19.432 09:46:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:19.432 09:46:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:18:19.999 09:46:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:19.999 09:46:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:18:19.999 09:46:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:19.999 09:46:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:18:19.999 09:46:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:19.999 09:46:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:18:20.565 09:46:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:18:20.565 09:46:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:18:20.565 09:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:18:20.823 09:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:18:22.219 09:46:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:18:22.219 09:46:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:18:22.219 09:46:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
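Each iteration above flips the ANA state of the two listeners on the target and re-runs the same six probes; at multipath_status.sh@116 the log additionally switches the bdev's policy to active_active, after which both optimized paths report current=true (the @121 check). set_ANA_state is not shown verbatim in this excerpt, so the helper below is reconstructed from the logged @59/@60 calls:

# Apply one ANA state per listener (target-side RPC on the default /var/tmp/spdk.sock).
set_ANA_state() {
    local state_4420=$1 state_4421=$2
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state \
        nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n "$state_4420"
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state \
        nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n "$state_4421"
}

# Switch the host-side bdev to active_active, then drive the same permutation the log checks next.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
    bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active
set_ANA_state optimized optimized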
00:18:22.219 09:46:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:18:22.219 09:46:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:22.219 09:46:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:18:22.219 09:46:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:22.219 09:46:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:18:22.477 09:46:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:22.477 09:46:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:18:22.477 09:46:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:22.477 09:46:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:18:22.734 09:46:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:22.734 09:46:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:18:22.734 09:46:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:22.734 09:46:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:18:22.992 09:46:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:22.992 09:46:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:18:22.992 09:46:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:22.992 09:46:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:18:23.559 09:46:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:23.559 09:46:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:18:23.559 09:46:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:23.559 09:46:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:18:23.559 09:46:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:23.559 
09:46:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:18:23.559 09:46:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:18:24.126 09:46:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:18:24.126 09:46:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:18:25.500 09:46:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:18:25.500 09:46:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:18:25.500 09:46:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:25.501 09:46:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:18:25.501 09:46:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:25.501 09:46:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:18:25.501 09:46:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:25.501 09:46:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:18:26.067 09:46:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:26.067 09:46:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:18:26.067 09:46:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:26.067 09:46:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:18:26.067 09:46:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:26.067 09:46:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:18:26.067 09:46:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:26.067 09:46:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:18:26.635 09:46:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:26.635 09:46:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:18:26.635 09:46:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:26.635 09:46:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:18:26.635 09:46:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:26.635 09:46:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:18:26.635 09:46:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:26.635 09:46:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:18:27.202 09:46:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:27.202 09:46:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:18:27.202 09:46:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:18:27.461 09:46:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 00:18:27.719 09:46:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:18:28.655 09:46:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:18:28.655 09:46:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:18:28.655 09:46:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:28.655 09:46:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:18:28.914 09:46:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:28.914 09:46:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:18:28.914 09:46:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:18:28.914 09:46:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:29.173 09:46:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:29.173 09:46:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 
connected true 00:18:29.173 09:46:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:29.173 09:46:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:18:29.432 09:46:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:29.432 09:46:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:18:29.432 09:46:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:29.432 09:46:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:18:29.691 09:46:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:29.691 09:46:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:18:29.691 09:46:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:29.691 09:46:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:18:29.949 09:46:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:29.949 09:46:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:18:29.949 09:46:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:29.949 09:46:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:18:30.208 09:46:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:30.208 09:46:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:18:30.208 09:46:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:18:30.467 09:46:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:18:31.035 09:46:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:18:31.970 09:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:18:31.970 09:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:18:31.970 09:46:19 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:31.970 09:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:18:32.241 09:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:32.241 09:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:18:32.241 09:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:32.241 09:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:18:32.509 09:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:32.509 09:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:18:32.509 09:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:32.509 09:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:18:32.767 09:46:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:32.767 09:46:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:18:32.767 09:46:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:32.767 09:46:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:18:33.025 09:46:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:33.025 09:46:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:18:33.025 09:46:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:33.026 09:46:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:18:33.284 09:46:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:33.284 09:46:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:18:33.284 09:46:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:33.284 09:46:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4421").accessible'
00:18:33.543 09:46:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:18:33.543 09:46:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 76572
00:18:33.543 09:46:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 76572 ']'
00:18:33.543 09:46:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 76572
00:18:33.543 09:46:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname
00:18:33.543 09:46:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:18:33.543 09:46:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76572
00:18:33.806 killing process with pid 76572 09:46:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:18:33.806 09:46:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:18:33.806 09:46:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76572'
00:18:33.806 09:46:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 76572
00:18:33.806 09:46:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 76572
00:18:33.806 {
00:18:33.806 "results": [
00:18:33.806 {
00:18:33.806 "job": "Nvme0n1",
00:18:33.806 "core_mask": "0x4",
00:18:33.806 "workload": "verify",
00:18:33.806 "status": "terminated",
00:18:33.806 "verify_range": {
00:18:33.806 "start": 0,
00:18:33.806 "length": 16384
00:18:33.806 },
00:18:33.806 "queue_depth": 128,
00:18:33.806 "io_size": 4096,
00:18:33.806 "runtime": 35.476408,
00:18:33.806 "iops": 8738.229642640259,
00:18:33.806 "mibps": 34.13370954156351,
00:18:33.806 "io_failed": 0,
00:18:33.806 "io_timeout": 0,
00:18:33.806 "avg_latency_us": 14616.030112465913,
00:18:33.806 "min_latency_us": 603.2290909090909,
00:18:33.806 "max_latency_us": 4026531.84
00:18:33.806 }
00:18:33.806 ],
00:18:33.806 "core_count": 1
00:18:33.806 }
00:18:33.806 09:46:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 76572
00:18:33.806 09:46:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:18:33.806 [2024-11-19 09:45:44.315174] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization...
00:18:33.806 [2024-11-19 09:45:44.315279] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76572 ]
00:18:33.806 [2024-11-19 09:45:44.463962] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:33.806 [2024-11-19 09:45:44.534903] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:18:33.806 [2024-11-19 09:45:44.594076] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring
00:18:33.806 Running I/O for 90 seconds...
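The trace above drives each port_status check by reading bdev_nvme_get_io_paths from the bdevperf RPC socket and using jq to select one io_path by trsvcid and compare a single flag (current/connected/accessible) against the expected value, and the JSON summary printed at shutdown reports the verify job's throughput. Below is a minimal Python sketch of those same two checks, assuming only the JSON shapes visible in this log; the helper names are illustrative, not SPDK or test-suite APIs.

import json

def port_field(io_paths_json, trsvcid, field):
    # Rough equivalent of:
    #   jq -r '.poll_groups[].io_paths[] | select(.transport.trsvcid=="<port>").<field>'
    # Returns the requested flag for the first io_path listening on the given port.
    data = json.loads(io_paths_json)
    for group in data.get("poll_groups", []):
        for path in group.get("io_paths", []):
            if path.get("transport", {}).get("trsvcid") == trsvcid:
                return path.get(field)
    return None

def mibps_from_results(results_json):
    # Cross-check the reported "mibps" from iops * io_size (bytes) / 2^20.
    job = json.loads(results_json)["results"][0]
    return job["iops"] * job["io_size"] / (1024 * 1024)

With the summary above, 8738.229642640259 * 4096 / 1048576 is approximately 34.134 MiB/s, which matches the reported "mibps" value.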
00:18:33.806 6933.00 IOPS, 27.08 MiB/s [2024-11-19T09:46:21.429Z] 8052.00 IOPS, 31.45 MiB/s [2024-11-19T09:46:21.429Z] 8517.33 IOPS, 33.27 MiB/s [2024-11-19T09:46:21.429Z] 8730.00 IOPS, 34.10 MiB/s [2024-11-19T09:46:21.429Z] 8870.40 IOPS, 34.65 MiB/s [2024-11-19T09:46:21.429Z] 8928.17 IOPS, 34.88 MiB/s [2024-11-19T09:46:21.429Z] 8976.43 IOPS, 35.06 MiB/s [2024-11-19T09:46:21.429Z] 9009.38 IOPS, 35.19 MiB/s [2024-11-19T09:46:21.429Z] 9011.56 IOPS, 35.20 MiB/s [2024-11-19T09:46:21.429Z] 9051.60 IOPS, 35.36 MiB/s [2024-11-19T09:46:21.429Z] 9072.64 IOPS, 35.44 MiB/s [2024-11-19T09:46:21.429Z] 9096.92 IOPS, 35.53 MiB/s [2024-11-19T09:46:21.429Z] 9118.38 IOPS, 35.62 MiB/s [2024-11-19T09:46:21.429Z] 9144.50 IOPS, 35.72 MiB/s [2024-11-19T09:46:21.429Z] 9161.00 IOPS, 35.79 MiB/s [2024-11-19T09:46:21.429Z] [2024-11-19 09:46:01.165785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:83640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.806 [2024-11-19 09:46:01.165852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:18:33.806 [2024-11-19 09:46:01.165907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:83648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.806 [2024-11-19 09:46:01.165927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:18:33.806 [2024-11-19 09:46:01.165947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:83656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.806 [2024-11-19 09:46:01.165961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:18:33.806 [2024-11-19 09:46:01.165981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:83664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.806 [2024-11-19 09:46:01.165994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:18:33.806 [2024-11-19 09:46:01.166014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:83672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.806 [2024-11-19 09:46:01.166027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:18:33.806 [2024-11-19 09:46:01.166052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:83680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.806 [2024-11-19 09:46:01.166065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:18:33.806 [2024-11-19 09:46:01.166084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:83688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.806 [2024-11-19 09:46:01.166097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:18:33.806 [2024-11-19 09:46:01.166116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:83696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.806 [2024-11-19 09:46:01.166129] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:18:33.806 [2024-11-19 09:46:01.166149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:83704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.806 [2024-11-19 09:46:01.166162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:18:33.806 [2024-11-19 09:46:01.166251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.806 [2024-11-19 09:46:01.166268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:18:33.806 [2024-11-19 09:46:01.166288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:83720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.806 [2024-11-19 09:46:01.166302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:33.806 [2024-11-19 09:46:01.166322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:83728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.806 [2024-11-19 09:46:01.166336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:18:33.806 [2024-11-19 09:46:01.166355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:83736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.806 [2024-11-19 09:46:01.166471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:18:33.806 [2024-11-19 09:46:01.166496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:83744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.806 [2024-11-19 09:46:01.166513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:18:33.806 [2024-11-19 09:46:01.166539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:83752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.806 [2024-11-19 09:46:01.166553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:18:33.806 [2024-11-19 09:46:01.166572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:83760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.806 [2024-11-19 09:46:01.166586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:33.806 [2024-11-19 09:46:01.166606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:83768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.806 [2024-11-19 09:46:01.166620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:33.807 [2024-11-19 09:46:01.166654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:83776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:18:33.807 [2024-11-19 09:46:01.166668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:33.807 [2024-11-19 09:46:01.166687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:83784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.807 [2024-11-19 09:46:01.166701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:18:33.807 [2024-11-19 09:46:01.166720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:83792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.807 [2024-11-19 09:46:01.166733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:33.807 [2024-11-19 09:46:01.166753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:83192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.807 [2024-11-19 09:46:01.166766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:18:33.807 [2024-11-19 09:46:01.166797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:83200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.807 [2024-11-19 09:46:01.166812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:33.807 [2024-11-19 09:46:01.166831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:83208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.807 [2024-11-19 09:46:01.166845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:18:33.807 [2024-11-19 09:46:01.166897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:83216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.807 [2024-11-19 09:46:01.166927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:18:33.807 [2024-11-19 09:46:01.166948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:83224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.807 [2024-11-19 09:46:01.166963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:18:33.807 [2024-11-19 09:46:01.166984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:83232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.807 [2024-11-19 09:46:01.166999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:18:33.807 [2024-11-19 09:46:01.167020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:83240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.807 [2024-11-19 09:46:01.167035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:18:33.807 [2024-11-19 09:46:01.167056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 
nsid:1 lba:83248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.807 [2024-11-19 09:46:01.167071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:18:33.807 [2024-11-19 09:46:01.167092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:83256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.807 [2024-11-19 09:46:01.167107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:18:33.807 [2024-11-19 09:46:01.167141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:83264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.807 [2024-11-19 09:46:01.167160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:18:33.807 [2024-11-19 09:46:01.167182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:83272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.807 [2024-11-19 09:46:01.167197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:18:33.807 [2024-11-19 09:46:01.167232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:83280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.807 [2024-11-19 09:46:01.167287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:18:33.807 [2024-11-19 09:46:01.167311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:83288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.807 [2024-11-19 09:46:01.167326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:18:33.807 [2024-11-19 09:46:01.167348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:83296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.807 [2024-11-19 09:46:01.167372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:18:33.807 [2024-11-19 09:46:01.167395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:83304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.807 [2024-11-19 09:46:01.167409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:18:33.807 [2024-11-19 09:46:01.167431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:83312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.807 [2024-11-19 09:46:01.167445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:18:33.807 [2024-11-19 09:46:01.167466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:83320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.807 [2024-11-19 09:46:01.167481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:18:33.807 [2024-11-19 09:46:01.167503] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:83328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.807 [2024-11-19 09:46:01.167518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:18:33.807 [2024-11-19 09:46:01.167539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:83336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.807 [2024-11-19 09:46:01.167553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:18:33.807 [2024-11-19 09:46:01.167575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:83344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.807 [2024-11-19 09:46:01.167589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:18:33.807 [2024-11-19 09:46:01.167610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:83352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.807 [2024-11-19 09:46:01.167640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:18:33.807 [2024-11-19 09:46:01.167661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:83360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.807 [2024-11-19 09:46:01.167691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:18:33.807 [2024-11-19 09:46:01.167712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:83368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.807 [2024-11-19 09:46:01.167727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:33.807 [2024-11-19 09:46:01.167748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:83376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.807 [2024-11-19 09:46:01.167763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:18:33.807 [2024-11-19 09:46:01.167784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:83800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.807 [2024-11-19 09:46:01.167799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:18:33.807 [2024-11-19 09:46:01.167820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:83808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.807 [2024-11-19 09:46:01.167856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:18:33.807 [2024-11-19 09:46:01.167878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:83816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.807 [2024-11-19 09:46:01.167893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 
00:18:33.807 [2024-11-19 09:46:01.167914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:83824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.807 [2024-11-19 09:46:01.167945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:33.807 [2024-11-19 09:46:01.167998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:83832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.807 [2024-11-19 09:46:01.168018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:33.807 [2024-11-19 09:46:01.168040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:83840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.807 [2024-11-19 09:46:01.168057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:33.807 [2024-11-19 09:46:01.168079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:83848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.807 [2024-11-19 09:46:01.168094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:18:33.807 [2024-11-19 09:46:01.168115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:83856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.807 [2024-11-19 09:46:01.168130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:18:33.807 [2024-11-19 09:46:01.168151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:83864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.807 [2024-11-19 09:46:01.168167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:18:33.807 [2024-11-19 09:46:01.168188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:83872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.807 [2024-11-19 09:46:01.168202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:33.807 [2024-11-19 09:46:01.168223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:83880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.807 [2024-11-19 09:46:01.168238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:18:33.807 [2024-11-19 09:46:01.168260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:83888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.808 [2024-11-19 09:46:01.168319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:18:33.808 [2024-11-19 09:46:01.168377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:83384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.808 [2024-11-19 09:46:01.168392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:18:33.808 [2024-11-19 09:46:01.168412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:83392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.808 [2024-11-19 09:46:01.168436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:18:33.808 [2024-11-19 09:46:01.168457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:83400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.808 [2024-11-19 09:46:01.168471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:18:33.808 [2024-11-19 09:46:01.168522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:83408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.808 [2024-11-19 09:46:01.168536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:18:33.808 [2024-11-19 09:46:01.168557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:83416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.808 [2024-11-19 09:46:01.168571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:18:33.808 [2024-11-19 09:46:01.168592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:83424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.808 [2024-11-19 09:46:01.168606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:18:33.808 [2024-11-19 09:46:01.168627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:83432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.808 [2024-11-19 09:46:01.168641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:18:33.808 [2024-11-19 09:46:01.168662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:83440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.808 [2024-11-19 09:46:01.168676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:18:33.808 [2024-11-19 09:46:01.168697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:83896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.808 [2024-11-19 09:46:01.168711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:18:33.808 [2024-11-19 09:46:01.168748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:83904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.808 [2024-11-19 09:46:01.168763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:18:33.808 [2024-11-19 09:46:01.168784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:83912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.808 [2024-11-19 09:46:01.168799] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:18:33.808 [2024-11-19 09:46:01.168820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:83920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.808 [2024-11-19 09:46:01.168834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:18:33.808 [2024-11-19 09:46:01.168855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:83928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.808 [2024-11-19 09:46:01.168870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:18:33.808 [2024-11-19 09:46:01.168891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:83936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.808 [2024-11-19 09:46:01.168906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:18:33.808 [2024-11-19 09:46:01.168934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:83944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.808 [2024-11-19 09:46:01.168950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:18:33.808 [2024-11-19 09:46:01.168972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:83952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.808 [2024-11-19 09:46:01.168986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:18:33.808 [2024-11-19 09:46:01.169007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:83960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.808 [2024-11-19 09:46:01.169022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:18:33.808 [2024-11-19 09:46:01.169043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:83968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.808 [2024-11-19 09:46:01.169057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:18:33.808 [2024-11-19 09:46:01.169078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:83976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.808 [2024-11-19 09:46:01.169093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:33.808 [2024-11-19 09:46:01.169144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:83984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.808 [2024-11-19 09:46:01.169157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:18:33.808 [2024-11-19 09:46:01.169177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:83992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:18:33.808 [2024-11-19 09:46:01.169192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:18:33.808 [2024-11-19 09:46:01.169212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:84000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.808 [2024-11-19 09:46:01.169243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:18:33.808 [2024-11-19 09:46:01.169270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:84008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.808 [2024-11-19 09:46:01.169306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.808 [2024-11-19 09:46:01.169338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:84016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.808 [2024-11-19 09:46:01.169356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:33.808 [2024-11-19 09:46:01.169378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:84024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.808 [2024-11-19 09:46:01.169393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:33.808 [2024-11-19 09:46:01.169415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:84032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.808 [2024-11-19 09:46:01.169430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:33.808 [2024-11-19 09:46:01.169460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:84040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.808 [2024-11-19 09:46:01.169475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:18:33.808 [2024-11-19 09:46:01.169497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:84048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.808 [2024-11-19 09:46:01.169512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:33.808 [2024-11-19 09:46:01.169533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:83448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.808 [2024-11-19 09:46:01.169547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:18:33.808 [2024-11-19 09:46:01.169568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:83456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.808 [2024-11-19 09:46:01.169583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:33.808 [2024-11-19 09:46:01.169604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 
lba:83464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.808 [2024-11-19 09:46:01.169618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:18:33.808 [2024-11-19 09:46:01.169640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:83472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.808 [2024-11-19 09:46:01.169654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:18:33.808 [2024-11-19 09:46:01.169675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:83480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.808 [2024-11-19 09:46:01.169690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:18:33.808 [2024-11-19 09:46:01.169711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:83488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.808 [2024-11-19 09:46:01.169725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:18:33.808 [2024-11-19 09:46:01.169761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:83496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.808 [2024-11-19 09:46:01.169791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:18:33.808 [2024-11-19 09:46:01.169826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:83504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.808 [2024-11-19 09:46:01.169856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:18:33.808 [2024-11-19 09:46:01.169876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:84056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.808 [2024-11-19 09:46:01.169889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:18:33.808 [2024-11-19 09:46:01.169909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:84064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.808 [2024-11-19 09:46:01.169923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:18:33.809 [2024-11-19 09:46:01.169949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:84072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.809 [2024-11-19 09:46:01.169969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:18:33.809 [2024-11-19 09:46:01.169990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:84080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.809 [2024-11-19 09:46:01.170021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:18:33.809 [2024-11-19 09:46:01.170046] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:84088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.809 [2024-11-19 09:46:01.170061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:18:33.809 [2024-11-19 09:46:01.170081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:84096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.809 [2024-11-19 09:46:01.170095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:18:33.809 [2024-11-19 09:46:01.170130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:84104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.809 [2024-11-19 09:46:01.170143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:18:33.809 [2024-11-19 09:46:01.170163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:84112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.809 [2024-11-19 09:46:01.170177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:18:33.809 [2024-11-19 09:46:01.170213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:84120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.809 [2024-11-19 09:46:01.170244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:18:33.809 [2024-11-19 09:46:01.170264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:84128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.809 [2024-11-19 09:46:01.170279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:18:33.809 [2024-11-19 09:46:01.170300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:84136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.809 [2024-11-19 09:46:01.170314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:18:33.809 [2024-11-19 09:46:01.170348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:84144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.809 [2024-11-19 09:46:01.170365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:18:33.809 [2024-11-19 09:46:01.170387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:83512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.809 [2024-11-19 09:46:01.170401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:18:33.809 [2024-11-19 09:46:01.170422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:83520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.809 [2024-11-19 09:46:01.170437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:001b p:0 m:0 dnr:0 
00:18:33.809 [2024-11-19 09:46:01.170457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:83528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.809 [2024-11-19 09:46:01.170479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:33.809 [2024-11-19 09:46:01.170502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:83536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.809 [2024-11-19 09:46:01.170516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:18:33.809 [2024-11-19 09:46:01.170538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:83544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.809 [2024-11-19 09:46:01.170552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:18:33.809 [2024-11-19 09:46:01.170573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:83552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.809 [2024-11-19 09:46:01.170602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:18:33.809 [2024-11-19 09:46:01.170642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:83560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.809 [2024-11-19 09:46:01.170656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:18:33.809 [2024-11-19 09:46:01.170676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:83568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.809 [2024-11-19 09:46:01.170690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:33.809 [2024-11-19 09:46:01.170710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:83576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.809 [2024-11-19 09:46:01.170723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:33.809 [2024-11-19 09:46:01.170743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:83584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.809 [2024-11-19 09:46:01.170757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:33.809 [2024-11-19 09:46:01.170777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:83592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.809 [2024-11-19 09:46:01.170790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:33.809 [2024-11-19 09:46:01.170810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:83600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.809 [2024-11-19 09:46:01.170824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:33.809 [2024-11-19 09:46:01.170844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:83608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.809 [2024-11-19 09:46:01.170858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:33.809 [2024-11-19 09:46:01.170884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:83616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.809 [2024-11-19 09:46:01.170898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:33.809 [2024-11-19 09:46:01.170918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:83624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.809 [2024-11-19 09:46:01.170932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:33.809 [2024-11-19 09:46:01.171623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:83632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.809 [2024-11-19 09:46:01.171664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:18:33.809 [2024-11-19 09:46:01.171697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:84152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.809 [2024-11-19 09:46:01.171713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:18:33.809 [2024-11-19 09:46:01.171740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:84160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.809 [2024-11-19 09:46:01.171754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:18:33.809 [2024-11-19 09:46:01.171780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:84168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.809 [2024-11-19 09:46:01.171794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:33.809 [2024-11-19 09:46:01.171821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:84176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.809 [2024-11-19 09:46:01.171835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:33.809 [2024-11-19 09:46:01.171861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:84184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.809 [2024-11-19 09:46:01.171875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:33.809 [2024-11-19 09:46:01.171902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:84192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.809 [2024-11-19 09:46:01.171916] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:33.809 [2024-11-19 09:46:01.171949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:84200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.809 [2024-11-19 09:46:01.171964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:18:33.809 [2024-11-19 09:46:01.172005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:84208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.809 [2024-11-19 09:46:01.172023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:18:33.809 8841.94 IOPS, 34.54 MiB/s [2024-11-19T09:46:21.432Z] 8321.82 IOPS, 32.51 MiB/s [2024-11-19T09:46:21.432Z] 7859.50 IOPS, 30.70 MiB/s [2024-11-19T09:46:21.432Z] 7445.84 IOPS, 29.09 MiB/s [2024-11-19T09:46:21.432Z] 7331.20 IOPS, 28.64 MiB/s [2024-11-19T09:46:21.432Z] 7424.57 IOPS, 29.00 MiB/s [2024-11-19T09:46:21.432Z] 7506.55 IOPS, 29.32 MiB/s [2024-11-19T09:46:21.432Z] 7664.65 IOPS, 29.94 MiB/s [2024-11-19T09:46:21.432Z] 7866.88 IOPS, 30.73 MiB/s [2024-11-19T09:46:21.432Z] 8054.16 IOPS, 31.46 MiB/s [2024-11-19T09:46:21.432Z] 8207.77 IOPS, 32.06 MiB/s [2024-11-19T09:46:21.432Z] 8231.19 IOPS, 32.15 MiB/s [2024-11-19T09:46:21.432Z] 8252.64 IOPS, 32.24 MiB/s [2024-11-19T09:46:21.432Z] 8275.93 IOPS, 32.33 MiB/s [2024-11-19T09:46:21.432Z] 8360.47 IOPS, 32.66 MiB/s [2024-11-19T09:46:21.432Z] 8498.06 IOPS, 33.20 MiB/s [2024-11-19T09:46:21.432Z] 8629.00 IOPS, 33.71 MiB/s [2024-11-19T09:46:21.432Z] [2024-11-19 09:46:18.335809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:42680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.809 [2024-11-19 09:46:18.335913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:18:33.809 [2024-11-19 09:46:18.335975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:42696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.809 [2024-11-19 09:46:18.336045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:33.810 [2024-11-19 09:46:18.336072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:42712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.810 [2024-11-19 09:46:18.336087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:18:33.810 [2024-11-19 09:46:18.336108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:42096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.810 [2024-11-19 09:46:18.336123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:33.810 [2024-11-19 09:46:18.336144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:42128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.810 [2024-11-19 09:46:18.336173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0008 p:0 m:0 
dnr:0 00:18:33.810 [2024-11-19 09:46:18.336194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:42320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.810 [2024-11-19 09:46:18.336208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:18:33.810 [2024-11-19 09:46:18.336256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:42352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.810 [2024-11-19 09:46:18.336290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:18:33.810 [2024-11-19 09:46:18.336311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:42376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.810 [2024-11-19 09:46:18.336325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:18:33.810 [2024-11-19 09:46:18.336347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:42408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.810 [2024-11-19 09:46:18.336361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:18:33.810 [2024-11-19 09:46:18.336381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:42736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.810 [2024-11-19 09:46:18.336395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:18:33.810 [2024-11-19 09:46:18.336415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:42752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.810 [2024-11-19 09:46:18.336429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:18:33.810 [2024-11-19 09:46:18.336449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:42768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.810 [2024-11-19 09:46:18.336463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:18:33.810 [2024-11-19 09:46:18.336483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:42784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.810 [2024-11-19 09:46:18.336497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:18:33.810 [2024-11-19 09:46:18.336516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:42440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.810 [2024-11-19 09:46:18.336540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:18:33.810 [2024-11-19 09:46:18.336562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:42472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.810 [2024-11-19 09:46:18.336577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:18:33.810 [2024-11-19 09:46:18.336611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:42504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.810 [2024-11-19 09:46:18.336625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:18:33.810 [2024-11-19 09:46:18.336645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:42168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.810 [2024-11-19 09:46:18.336659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:18:33.810 [2024-11-19 09:46:18.336680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:42792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.810 [2024-11-19 09:46:18.336694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:18:33.810 [2024-11-19 09:46:18.336714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:42808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.810 [2024-11-19 09:46:18.336728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:18:33.810 [2024-11-19 09:46:18.336747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:42824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.810 [2024-11-19 09:46:18.336761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:18:33.810 [2024-11-19 09:46:18.336781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:42840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.810 [2024-11-19 09:46:18.336794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:18:33.810 [2024-11-19 09:46:18.336814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:42200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.810 [2024-11-19 09:46:18.336828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:18:33.810 [2024-11-19 09:46:18.336847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:42232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.810 [2024-11-19 09:46:18.336861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:18:33.810 [2024-11-19 09:46:18.336880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:42264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.810 [2024-11-19 09:46:18.336894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:18:33.810 [2024-11-19 09:46:18.336915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:42296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.810 [2024-11-19 09:46:18.336929] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:33.810 [2024-11-19 09:46:18.336949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:42544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.810 [2024-11-19 09:46:18.336963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:18:33.810 [2024-11-19 09:46:18.336990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:42864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.810 [2024-11-19 09:46:18.337004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:18:33.810 [2024-11-19 09:46:18.337024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:42880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.810 [2024-11-19 09:46:18.337038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:18:33.810 [2024-11-19 09:46:18.337058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:42896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.810 [2024-11-19 09:46:18.337071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:18:33.810 [2024-11-19 09:46:18.337091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:42912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.810 [2024-11-19 09:46:18.337104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:33.810 [2024-11-19 09:46:18.337124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:42928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.810 [2024-11-19 09:46:18.337138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:33.810 [2024-11-19 09:46:18.337157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:42560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.810 [2024-11-19 09:46:18.337171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:33.810 [2024-11-19 09:46:18.337190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:42312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.810 [2024-11-19 09:46:18.337204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:33.810 [2024-11-19 09:46:18.337224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:42344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.810 [2024-11-19 09:46:18.337238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:33.810 [2024-11-19 09:46:18.337268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:42384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:33.810 [2024-11-19 09:46:18.337284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:33.811 [2024-11-19 09:46:18.337321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:42416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.811 [2024-11-19 09:46:18.337336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:33.811 [2024-11-19 09:46:18.337375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:42608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.811 [2024-11-19 09:46:18.337394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:33.811 [2024-11-19 09:46:18.337415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:42640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.811 [2024-11-19 09:46:18.337429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:18:33.811 [2024-11-19 09:46:18.337460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:42672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.811 [2024-11-19 09:46:18.337475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:18:33.811 [2024-11-19 09:46:18.337495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:42952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.811 [2024-11-19 09:46:18.337509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:18:33.811 [2024-11-19 09:46:18.337530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:42968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.811 [2024-11-19 09:46:18.337544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:33.811 [2024-11-19 09:46:18.337564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:42984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.811 [2024-11-19 09:46:18.337578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:33.811 [2024-11-19 09:46:18.337599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:43000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.811 [2024-11-19 09:46:18.337612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:33.811 [2024-11-19 09:46:18.337633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:43016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.811 [2024-11-19 09:46:18.337647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:33.811 [2024-11-19 09:46:18.337667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 
lba:43032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.811 [2024-11-19 09:46:18.337681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:18:33.811 [2024-11-19 09:46:18.337701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:42432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.811 [2024-11-19 09:46:18.337715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:18:33.811 [2024-11-19 09:46:18.337736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:42464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.811 [2024-11-19 09:46:18.337750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:18:33.811 [2024-11-19 09:46:18.337771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:42496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.811 [2024-11-19 09:46:18.337785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:18:33.811 [2024-11-19 09:46:18.337805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:42520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.811 [2024-11-19 09:46:18.337819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:18:33.811 [2024-11-19 09:46:18.337840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:43048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.811 [2024-11-19 09:46:18.337854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:18:33.811 [2024-11-19 09:46:18.337874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:43064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.811 [2024-11-19 09:46:18.337894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:18:33.811 [2024-11-19 09:46:18.337915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:43080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.811 [2024-11-19 09:46:18.337929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:18:33.811 [2024-11-19 09:46:18.337950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:42552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.811 [2024-11-19 09:46:18.337964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:18:33.811 [2024-11-19 09:46:18.339476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:43096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.811 [2024-11-19 09:46:18.339506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:18:33.811 [2024-11-19 09:46:18.339533] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:43112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.811 [2024-11-19 09:46:18.339550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:18:33.811 [2024-11-19 09:46:18.339571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:43128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.811 [2024-11-19 09:46:18.339586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:18:33.811 [2024-11-19 09:46:18.339622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:43144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.811 [2024-11-19 09:46:18.339637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:33.811 [2024-11-19 09:46:18.339657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:42688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.811 [2024-11-19 09:46:18.339687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:18:33.811 [2024-11-19 09:46:18.339708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:42720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.811 [2024-11-19 09:46:18.339722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:18:33.811 [2024-11-19 09:46:18.339743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:42744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.811 [2024-11-19 09:46:18.339757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:18:33.811 [2024-11-19 09:46:18.339778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:42776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.811 [2024-11-19 09:46:18.339793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:18:33.811 [2024-11-19 09:46:18.339813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:42600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.811 [2024-11-19 09:46:18.339829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:33.811 [2024-11-19 09:46:18.339850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:42632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.811 [2024-11-19 09:46:18.339877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:33.811 [2024-11-19 09:46:18.339900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:42664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.811 [2024-11-19 09:46:18.339915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 
00:18:33.811 [2024-11-19 09:46:18.339936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:43168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.811 [2024-11-19 09:46:18.339950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:18:33.811 [2024-11-19 09:46:18.339971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:43184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.811 [2024-11-19 09:46:18.339986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:33.811 [2024-11-19 09:46:18.340007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:43200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.811 [2024-11-19 09:46:18.340021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:18:33.811 [2024-11-19 09:46:18.340043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:43216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.811 [2024-11-19 09:46:18.340057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:33.811 8710.82 IOPS, 34.03 MiB/s [2024-11-19T09:46:21.434Z] 8723.79 IOPS, 34.08 MiB/s [2024-11-19T09:46:21.434Z] 8733.51 IOPS, 34.12 MiB/s [2024-11-19T09:46:21.434Z] Received shutdown signal, test time was about 35.477221 seconds 00:18:33.811 00:18:33.811 Latency(us) 00:18:33.811 [2024-11-19T09:46:21.434Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:33.811 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:33.811 Verification LBA range: start 0x0 length 0x4000 00:18:33.811 Nvme0n1 : 35.48 8738.23 34.13 0.00 0.00 14616.03 603.23 4026531.84 00:18:33.811 [2024-11-19T09:46:21.434Z] =================================================================================================================== 00:18:33.811 [2024-11-19T09:46:21.434Z] Total : 8738.23 34.13 0.00 0.00 14616.03 603.23 4026531.84 00:18:33.811 09:46:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:34.070 09:46:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:18:34.070 09:46:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:18:34.070 09:46:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:18:34.070 09:46:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:34.070 09:46:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:18:34.329 09:46:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:34.329 09:46:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:18:34.329 09:46:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:34.329 09:46:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # 
modprobe -v -r nvme-tcp 00:18:34.329 rmmod nvme_tcp 00:18:34.329 rmmod nvme_fabrics 00:18:34.329 rmmod nvme_keyring 00:18:34.329 09:46:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:34.329 09:46:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:18:34.329 09:46:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:18:34.329 09:46:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 76523 ']' 00:18:34.330 09:46:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 76523 00:18:34.330 09:46:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 76523 ']' 00:18:34.330 09:46:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 76523 00:18:34.330 09:46:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:18:34.330 09:46:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:34.330 09:46:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76523 00:18:34.330 09:46:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:34.330 09:46:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:34.330 killing process with pid 76523 00:18:34.330 09:46:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76523' 00:18:34.330 09:46:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 76523 00:18:34.330 09:46:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 76523 00:18:34.589 09:46:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:34.589 09:46:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:34.589 09:46:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:34.589 09:46:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:18:34.589 09:46:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:18:34.589 09:46:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:34.589 09:46:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:18:34.589 09:46:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:34.589 09:46:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:18:34.589 09:46:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:18:34.589 09:46:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:18:34.589 09:46:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:18:34.589 09:46:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:18:34.589 09:46:22 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:18:34.589 09:46:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:18:34.589 09:46:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:18:34.589 09:46:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:18:34.589 09:46:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:18:34.589 09:46:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:18:34.589 09:46:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:18:34.589 09:46:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:34.589 09:46:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:34.589 09:46:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@246 -- # remove_spdk_ns 00:18:34.589 09:46:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:34.589 09:46:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:34.589 09:46:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:34.848 09:46:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@300 -- # return 0 00:18:34.848 00:18:34.848 real 0m40.773s 00:18:34.848 user 2m12.799s 00:18:34.848 sys 0m12.041s 00:18:34.848 09:46:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:34.848 09:46:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:18:34.848 ************************************ 00:18:34.848 END TEST nvmf_host_multipath_status 00:18:34.848 ************************************ 00:18:34.848 09:46:22 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:18:34.848 09:46:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:34.848 09:46:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:34.848 09:46:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:18:34.848 ************************************ 00:18:34.848 START TEST nvmf_discovery_remove_ifc 00:18:34.848 ************************************ 00:18:34.848 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:18:34.848 * Looking for test storage... 
00:18:34.848 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:34.848 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:34.848 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:34.848 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lcov --version 00:18:34.848 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:34.848 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:34.848 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:34.848 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:34.848 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:18:34.848 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:18:34.848 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:18:34.848 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:18:34.848 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:18:34.848 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:18:34.848 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:18:34.848 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:34.848 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:18:34.848 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:18:34.848 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:34.848 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:34.848 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:18:34.848 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:18:34.849 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:34.849 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:18:34.849 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:18:34.849 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:18:34.849 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:18:34.849 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:34.849 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:18:34.849 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:18:34.849 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:34.849 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:34.849 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:18:34.849 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:34.849 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:34.849 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:34.849 --rc genhtml_branch_coverage=1 00:18:34.849 --rc genhtml_function_coverage=1 00:18:34.849 --rc genhtml_legend=1 00:18:34.849 --rc geninfo_all_blocks=1 00:18:34.849 --rc geninfo_unexecuted_blocks=1 00:18:34.849 00:18:34.849 ' 00:18:34.849 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:34.849 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:34.849 --rc genhtml_branch_coverage=1 00:18:34.849 --rc genhtml_function_coverage=1 00:18:34.849 --rc genhtml_legend=1 00:18:34.849 --rc geninfo_all_blocks=1 00:18:34.849 --rc geninfo_unexecuted_blocks=1 00:18:34.849 00:18:34.849 ' 00:18:34.849 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:34.849 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:34.849 --rc genhtml_branch_coverage=1 00:18:34.849 --rc genhtml_function_coverage=1 00:18:34.849 --rc genhtml_legend=1 00:18:34.849 --rc geninfo_all_blocks=1 00:18:34.849 --rc geninfo_unexecuted_blocks=1 00:18:34.849 00:18:34.849 ' 00:18:34.849 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:34.849 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:34.849 --rc genhtml_branch_coverage=1 00:18:34.849 --rc genhtml_function_coverage=1 00:18:34.849 --rc genhtml_legend=1 00:18:34.849 --rc geninfo_all_blocks=1 00:18:34.849 --rc geninfo_unexecuted_blocks=1 00:18:34.849 00:18:34.849 ' 00:18:34.849 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:34.849 09:46:22 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:18:34.849 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:34.849 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:34.849 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:34.849 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:34.849 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:34.849 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:34.849 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:34.849 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:34.849 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:34.849 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:34.849 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c 00:18:34.849 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=9203ba0c-8506-4f0b-a886-a7f874c4694c 00:18:34.849 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:34.849 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:34.849 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:34.849 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:34.849 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:34.849 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:18:34.849 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:34.849 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:34.849 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:34.849 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:34.849 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:34.849 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:34.849 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:18:34.849 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:34.849 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:18:34.849 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:34.849 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:34.849 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:34.849 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:34.849 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:34.849 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:34.849 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:34.849 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:34.849 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:34.849 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:35.108 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:18:35.108 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 
-- # discovery_port=8009 00:18:35.108 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:18:35.108 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:18:35.108 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:18:35.108 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:18:35.108 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:18:35.108 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:35.108 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:35.108 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:35.108 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:35.108 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:35.108 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:35.108 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:35.108 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:35.108 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:18:35.108 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:18:35.108 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:18:35.108 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:18:35.108 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:18:35.108 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@460 -- # nvmf_veth_init 00:18:35.108 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:35.108 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:35.108 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:35.108 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:35.108 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:35.108 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:18:35.108 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:35.108 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:35.108 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:35.108 09:46:22 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:35.108 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:35.108 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:35.108 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:35.108 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:35.108 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:35.108 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:35.108 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:18:35.108 Cannot find device "nvmf_init_br" 00:18:35.108 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # true 00:18:35.108 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:18:35.108 Cannot find device "nvmf_init_br2" 00:18:35.108 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # true 00:18:35.108 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:18:35.108 Cannot find device "nvmf_tgt_br" 00:18:35.108 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # true 00:18:35.108 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:18:35.108 Cannot find device "nvmf_tgt_br2" 00:18:35.108 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # true 00:18:35.108 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:18:35.108 Cannot find device "nvmf_init_br" 00:18:35.108 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # true 00:18:35.108 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:18:35.108 Cannot find device "nvmf_init_br2" 00:18:35.108 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # true 00:18:35.108 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:18:35.108 Cannot find device "nvmf_tgt_br" 00:18:35.108 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # true 00:18:35.108 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:18:35.108 Cannot find device "nvmf_tgt_br2" 00:18:35.108 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # true 00:18:35.108 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:18:35.108 Cannot find device "nvmf_br" 00:18:35.108 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # true 00:18:35.108 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:18:35.108 Cannot find device "nvmf_init_if" 00:18:35.108 09:46:22 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # true 00:18:35.108 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:18:35.108 Cannot find device "nvmf_init_if2" 00:18:35.108 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # true 00:18:35.108 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:35.108 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:35.108 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # true 00:18:35.108 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:35.108 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:35.108 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # true 00:18:35.108 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:18:35.108 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:35.108 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:18:35.108 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:35.108 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:35.108 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:35.108 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:35.108 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:35.108 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:18:35.109 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:35.109 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:35.109 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:18:35.109 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:18:35.109 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:18:35.109 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:18:35.109 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:18:35.109 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:18:35.109 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:35.109 09:46:22 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:35.109 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:35.109 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:18:35.109 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:18:35.368 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:18:35.368 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:18:35.368 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:35.368 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:35.368 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:35.368 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:35.368 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:35.368 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:35.368 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:35.368 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:18:35.368 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:18:35.368 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:35.368 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.065 ms 00:18:35.368 00:18:35.368 --- 10.0.0.3 ping statistics --- 00:18:35.368 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:35.368 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:18:35.368 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:18:35.368 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:18:35.368 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.038 ms 00:18:35.368 00:18:35.368 --- 10.0.0.4 ping statistics --- 00:18:35.368 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:35.368 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:18:35.368 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:35.368 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:35.368 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.019 ms 00:18:35.368 00:18:35.368 --- 10.0.0.1 ping statistics --- 00:18:35.368 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:35.368 rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms 00:18:35.368 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:18:35.368 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:35.368 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.045 ms 00:18:35.368 00:18:35.368 --- 10.0.0.2 ping statistics --- 00:18:35.368 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:35.368 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:18:35.368 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:35.368 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@461 -- # return 0 00:18:35.368 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:35.368 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:35.368 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:35.368 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:35.368 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:35.368 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:35.368 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:35.368 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:18:35.368 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:35.368 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:35.368 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:35.368 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=77434 00:18:35.368 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:35.368 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 77434 00:18:35.368 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 77434 ']' 00:18:35.368 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:35.368 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:35.368 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:35.368 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:18:35.368 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:35.368 09:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:35.368 [2024-11-19 09:46:22.894682] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:18:35.368 [2024-11-19 09:46:22.894787] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:35.627 [2024-11-19 09:46:23.044359] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:35.627 [2024-11-19 09:46:23.107563] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:35.627 [2024-11-19 09:46:23.107652] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:35.627 [2024-11-19 09:46:23.107678] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:35.627 [2024-11-19 09:46:23.107687] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:35.627 [2024-11-19 09:46:23.107695] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:35.627 [2024-11-19 09:46:23.108101] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:35.627 [2024-11-19 09:46:23.167389] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:36.562 09:46:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:36.562 09:46:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:18:36.562 09:46:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:36.562 09:46:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:36.562 09:46:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:36.562 09:46:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:36.562 09:46:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:18:36.562 09:46:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.562 09:46:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:36.562 [2024-11-19 09:46:23.919790] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:36.562 [2024-11-19 09:46:23.927933] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:18:36.562 null0 00:18:36.562 [2024-11-19 09:46:23.959885] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:36.562 09:46:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.562 09:46:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=77466 00:18:36.562 09:46:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 77466 /tmp/host.sock 00:18:36.562 
09:46:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 77466 ']' 00:18:36.562 09:46:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:18:36.562 09:46:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:18:36.562 09:46:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:36.562 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:18:36.562 09:46:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:18:36.562 09:46:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:36.562 09:46:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:36.562 [2024-11-19 09:46:24.045550] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:18:36.562 [2024-11-19 09:46:24.045667] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77466 ] 00:18:36.819 [2024-11-19 09:46:24.197539] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:36.819 [2024-11-19 09:46:24.257425] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:37.755 09:46:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:37.755 09:46:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:18:37.755 09:46:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:37.755 09:46:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:18:37.755 09:46:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.755 09:46:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:37.755 09:46:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.755 09:46:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:18:37.755 09:46:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.755 09:46:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:37.755 [2024-11-19 09:46:25.183013] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:37.755 09:46:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.755 09:46:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 
--ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:18:37.755 09:46:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.755 09:46:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:38.741 [2024-11-19 09:46:26.243058] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:18:38.741 [2024-11-19 09:46:26.243103] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:18:38.741 [2024-11-19 09:46:26.243152] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:18:38.741 [2024-11-19 09:46:26.249106] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 00:18:38.741 [2024-11-19 09:46:26.303523] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.3:4420 00:18:38.741 [2024-11-19 09:46:26.304682] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x726fc0:1 started. 00:18:38.741 [2024-11-19 09:46:26.306512] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:18:38.741 [2024-11-19 09:46:26.306576] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:18:38.741 [2024-11-19 09:46:26.306604] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:18:38.741 [2024-11-19 09:46:26.306621] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:18:38.741 [2024-11-19 09:46:26.306647] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:18:38.741 09:46:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.741 09:46:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:18:38.741 09:46:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:38.741 [2024-11-19 09:46:26.311857] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x726fc0 was disconnected and freed. delete nvme_qpair. 
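With the target listening on 10.0.0.3 (discovery on port 8009, I/O on 4420), a second SPDK app is started as the host side on /tmp/host.sock and attached through the discovery service. The rpc_cmd calls in the log are the harness wrapper around the project's RPC client; issued directly, the sequence is roughly the following, with deliberately short reconnect timers so that pulling the interface later tears the controller, and its bdev, down within a couple of seconds:

    # host-side app: one core, private RPC socket, bdev_nvme debug logging
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme &
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_set_options -e 1
    scripts/rpc.py -s /tmp/host.sock framework_start_init
    # attach via discovery; --wait-for-attach blocks until the discovered subsystem is
    # attached and its bdev (nvme0n1 here) has been created
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 \
        -f ipv4 -q nqn.2021-12.io.spdk:test \
        --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach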
00:18:38.741 09:46:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:38.741 09:46:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:38.741 09:46:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:38.741 09:46:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:38.741 09:46:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.741 09:46:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:38.741 09:46:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.741 09:46:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:18:38.741 09:46:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.3/24 dev nvmf_tgt_if 00:18:39.000 09:46:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:18:39.000 09:46:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:18:39.000 09:46:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:39.000 09:46:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:39.000 09:46:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.000 09:46:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:39.000 09:46:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:39.000 09:46:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:39.000 09:46:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:39.000 09:46:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.000 09:46:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:18:39.000 09:46:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:18:39.934 09:46:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:39.934 09:46:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:39.934 09:46:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.934 09:46:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:39.934 09:46:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:39.934 09:46:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:39.934 09:46:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:39.934 09:46:27 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.934 09:46:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:18:39.934 09:46:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:18:41.307 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:41.307 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:41.307 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:41.307 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:41.307 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:41.307 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.307 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:41.307 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.307 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:18:41.307 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:18:42.242 09:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:42.242 09:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:42.242 09:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.242 09:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:42.242 09:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:42.242 09:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:42.242 09:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:42.242 09:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.242 09:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:18:42.242 09:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:18:43.175 09:46:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:43.175 09:46:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:43.175 09:46:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:43.175 09:46:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.176 09:46:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:43.176 09:46:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:43.176 09:46:30 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:43.176 09:46:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.176 09:46:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:18:43.176 09:46:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:18:44.111 09:46:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:44.111 09:46:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:44.111 09:46:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:44.111 09:46:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:44.111 09:46:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.111 09:46:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:44.111 09:46:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:44.111 09:46:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.370 [2024-11-19 09:46:31.734230] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:18:44.370 [2024-11-19 09:46:31.734292] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:44.370 [2024-11-19 09:46:31.734308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.370 [2024-11-19 09:46:31.734322] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:44.370 [2024-11-19 09:46:31.734332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.370 [2024-11-19 09:46:31.734342] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:44.370 [2024-11-19 09:46:31.734351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.370 [2024-11-19 09:46:31.734361] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:44.370 [2024-11-19 09:46:31.734370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.370 [2024-11-19 09:46:31.734380] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:18:44.370 [2024-11-19 09:46:31.734389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.370 [2024-11-19 09:46:31.734399] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x703240 is same with the state(6) to be set 00:18:44.370 09:46:31 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:18:44.370 09:46:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:18:44.370 [2024-11-19 09:46:31.744226] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x703240 (9): Bad file descriptor 00:18:44.370 [2024-11-19 09:46:31.754260] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:18:44.370 [2024-11-19 09:46:31.754283] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:18:44.370 [2024-11-19 09:46:31.754292] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:18:44.370 [2024-11-19 09:46:31.754298] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:18:44.370 [2024-11-19 09:46:31.754338] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:18:45.355 09:46:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:45.355 09:46:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:45.355 09:46:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.355 09:46:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:45.355 09:46:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:45.355 09:46:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:45.355 09:46:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:45.355 [2024-11-19 09:46:32.784353] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 110 00:18:45.355 [2024-11-19 09:46:32.784504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x703240 with addr=10.0.0.3, port=4420 00:18:45.355 [2024-11-19 09:46:32.784540] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x703240 is same with the state(6) to be set 00:18:45.355 [2024-11-19 09:46:32.784610] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x703240 (9): Bad file descriptor 00:18:45.355 [2024-11-19 09:46:32.785514] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:18:45.355 [2024-11-19 09:46:32.785602] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:18:45.355 [2024-11-19 09:46:32.785628] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:18:45.355 [2024-11-19 09:46:32.785650] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:18:45.355 [2024-11-19 09:46:32.785671] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:18:45.355 [2024-11-19 09:46:32.785685] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 
00:18:45.355 [2024-11-19 09:46:32.785696] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:18:45.355 [2024-11-19 09:46:32.785718] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:18:45.355 [2024-11-19 09:46:32.785730] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:18:45.355 09:46:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.355 09:46:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:18:45.355 09:46:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:18:46.287 [2024-11-19 09:46:33.785809] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:18:46.287 [2024-11-19 09:46:33.785849] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:18:46.287 [2024-11-19 09:46:33.785877] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:18:46.287 [2024-11-19 09:46:33.785888] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:18:46.287 [2024-11-19 09:46:33.785899] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:18:46.287 [2024-11-19 09:46:33.785909] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:18:46.287 [2024-11-19 09:46:33.785916] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:18:46.287 [2024-11-19 09:46:33.785921] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:18:46.287 [2024-11-19 09:46:33.785954] bdev_nvme.c:7135:remove_discovery_entry: *INFO*: Discovery[10.0.0.3:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 00:18:46.287 [2024-11-19 09:46:33.785999] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:46.287 [2024-11-19 09:46:33.786015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.287 [2024-11-19 09:46:33.786029] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:46.287 [2024-11-19 09:46:33.786038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.287 [2024-11-19 09:46:33.786048] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:46.287 [2024-11-19 09:46:33.786057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.287 [2024-11-19 09:46:33.786067] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:46.287 [2024-11-19 09:46:33.786076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.287 [2024-11-19 09:46:33.786086] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:18:46.287 [2024-11-19 09:46:33.786095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.287 [2024-11-19 09:46:33.786105] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 
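The repeated bdev_get_bdevs / "sleep 1" entries above come from the test's wait-for-bdev polling: after the interface was removed, the host keeps retrying the connection (the errno 110 and "Bad file descriptor" errors), and once the 2-second ctrlr-loss timeout expires the controller and its nvme0n1 bdev are deleted, so the bdev list goes empty. A rough paraphrase of that polling helper, with names and structure approximate rather than the literal script source:

    # list bdev names known to the host app, normalised to a single sorted line
    get_bdev_list() {
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    # poll once per second until the bdev list matches what the test expects
    # ("nvme0n1" while attached, the empty string after the controller is lost)
    wait_for_bdev() {
        local expected=$1
        while [[ "$(get_bdev_list)" != "$expected" ]]; do
            sleep 1
        done
    }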
00:18:46.287 [2024-11-19 09:46:33.786756] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x68ea20 (9): Bad file descriptor 00:18:46.287 [2024-11-19 09:46:33.787770] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:18:46.287 [2024-11-19 09:46:33.787796] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:18:46.287 09:46:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:46.287 09:46:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:46.287 09:46:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:46.287 09:46:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.287 09:46:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:46.287 09:46:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:46.287 09:46:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:46.287 09:46:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.287 09:46:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:18:46.287 09:46:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:46.287 09:46:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:46.287 09:46:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:18:46.287 09:46:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:46.287 09:46:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:46.287 09:46:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:46.287 09:46:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.287 09:46:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:46.287 09:46:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:46.287 09:46:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:46.287 09:46:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.557 09:46:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:18:46.557 09:46:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:18:47.497 09:46:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:47.497 09:46:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:47.497 09:46:34 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:47.497 09:46:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.497 09:46:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:47.497 09:46:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:47.497 09:46:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:47.497 09:46:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.497 09:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:18:47.497 09:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:18:48.434 [2024-11-19 09:46:35.798108] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:18:48.434 [2024-11-19 09:46:35.798149] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:18:48.434 [2024-11-19 09:46:35.798185] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:18:48.434 [2024-11-19 09:46:35.804146] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme1 00:18:48.434 [2024-11-19 09:46:35.858597] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.3:4420 00:18:48.434 [2024-11-19 09:46:35.859607] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x6dff00:1 started. 00:18:48.434 [2024-11-19 09:46:35.861022] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:18:48.434 [2024-11-19 09:46:35.861087] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:18:48.434 [2024-11-19 09:46:35.861112] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:18:48.434 [2024-11-19 09:46:35.861129] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme1 done 00:18:48.434 [2024-11-19 09:46:35.861138] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:18:48.434 [2024-11-19 09:46:35.866827] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x6dff00 was disconnected and freed. delete nvme_qpair. 
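Recovery is the mirror image: the test re-adds the address and brings the interface back up inside the namespace, the still-running discovery poller on the host reconnects on its own, and the re-attached subsystem surfaces as a new controller (nvme1) with bdev nvme1n1. The relevant commands, as they appear in the log:

    # restore the target-side interface inside the namespace
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    # no new RPC is needed: the discovery connection re-establishes itself and the test
    # just waits (same polling loop as above) until bdev nvme1n1 appears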
00:18:48.434 09:46:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:48.434 09:46:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:48.434 09:46:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:48.434 09:46:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:48.434 09:46:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:48.434 09:46:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.434 09:46:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:48.434 09:46:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.692 09:46:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:18:48.692 09:46:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:18:48.692 09:46:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 77466 00:18:48.692 09:46:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 77466 ']' 00:18:48.692 09:46:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 77466 00:18:48.692 09:46:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:18:48.692 09:46:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:48.692 09:46:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77466 00:18:48.692 killing process with pid 77466 00:18:48.692 09:46:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:48.692 09:46:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:48.692 09:46:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77466' 00:18:48.692 09:46:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 77466 00:18:48.692 09:46:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 77466 00:18:48.693 09:46:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:18:48.693 09:46:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:48.693 09:46:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:18:48.951 09:46:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:48.951 09:46:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:18:48.951 09:46:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:48.951 09:46:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:48.951 rmmod nvme_tcp 00:18:48.951 rmmod nvme_fabrics 00:18:48.951 rmmod nvme_keyring 00:18:48.951 09:46:36 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:48.951 09:46:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:18:48.951 09:46:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:18:48.951 09:46:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 77434 ']' 00:18:48.951 09:46:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 77434 00:18:48.951 09:46:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 77434 ']' 00:18:48.951 09:46:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 77434 00:18:48.951 09:46:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:18:48.951 09:46:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:48.951 09:46:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77434 00:18:48.951 09:46:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:48.951 09:46:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:48.951 killing process with pid 77434 00:18:48.951 09:46:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77434' 00:18:48.951 09:46:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 77434 00:18:48.951 09:46:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 77434 00:18:49.209 09:46:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:49.209 09:46:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:49.209 09:46:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:49.209 09:46:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:18:49.209 09:46:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:18:49.209 09:46:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:49.209 09:46:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:18:49.210 09:46:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:49.210 09:46:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:18:49.210 09:46:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:18:49.210 09:46:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:18:49.210 09:46:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:18:49.210 09:46:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:18:49.210 09:46:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:18:49.210 09:46:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:18:49.210 09:46:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:18:49.210 09:46:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:18:49.210 09:46:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:18:49.210 09:46:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:18:49.210 09:46:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:18:49.210 09:46:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:49.210 09:46:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:49.210 09:46:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@246 -- # remove_spdk_ns 00:18:49.210 09:46:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:49.210 09:46:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:49.210 09:46:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:49.469 09:46:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@300 -- # return 0 00:18:49.469 00:18:49.469 real 0m14.567s 00:18:49.469 user 0m25.078s 00:18:49.469 sys 0m2.506s 00:18:49.469 09:46:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:49.469 09:46:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:49.469 ************************************ 00:18:49.469 END TEST nvmf_discovery_remove_ifc 00:18:49.469 ************************************ 00:18:49.469 09:46:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:18:49.469 09:46:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:49.469 09:46:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:49.469 09:46:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:18:49.469 ************************************ 00:18:49.469 START TEST nvmf_identify_kernel_target 00:18:49.469 ************************************ 00:18:49.469 09:46:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:18:49.469 * Looking for test storage... 
00:18:49.469 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:49.469 09:46:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:49.469 09:46:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lcov --version 00:18:49.469 09:46:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:49.469 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:49.469 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:49.469 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:49.469 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:49.469 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:18:49.469 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:18:49.469 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:18:49.469 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:18:49.469 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:18:49.469 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:18:49.469 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:18:49.469 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:49.469 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:18:49.469 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:18:49.469 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:49.469 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:49.469 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:18:49.469 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:18:49.469 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:49.469 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:18:49.469 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:18:49.469 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:18:49.469 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:18:49.469 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:49.469 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:18:49.469 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:18:49.469 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:49.469 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:49.469 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:18:49.469 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:49.469 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:49.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:49.469 --rc genhtml_branch_coverage=1 00:18:49.469 --rc genhtml_function_coverage=1 00:18:49.469 --rc genhtml_legend=1 00:18:49.469 --rc geninfo_all_blocks=1 00:18:49.469 --rc geninfo_unexecuted_blocks=1 00:18:49.469 00:18:49.469 ' 00:18:49.469 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:49.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:49.469 --rc genhtml_branch_coverage=1 00:18:49.469 --rc genhtml_function_coverage=1 00:18:49.469 --rc genhtml_legend=1 00:18:49.469 --rc geninfo_all_blocks=1 00:18:49.469 --rc geninfo_unexecuted_blocks=1 00:18:49.469 00:18:49.469 ' 00:18:49.469 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:49.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:49.469 --rc genhtml_branch_coverage=1 00:18:49.469 --rc genhtml_function_coverage=1 00:18:49.469 --rc genhtml_legend=1 00:18:49.469 --rc geninfo_all_blocks=1 00:18:49.469 --rc geninfo_unexecuted_blocks=1 00:18:49.469 00:18:49.469 ' 00:18:49.469 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:49.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:49.469 --rc genhtml_branch_coverage=1 00:18:49.469 --rc genhtml_function_coverage=1 00:18:49.469 --rc genhtml_legend=1 00:18:49.469 --rc geninfo_all_blocks=1 00:18:49.469 --rc geninfo_unexecuted_blocks=1 00:18:49.469 00:18:49.469 ' 00:18:49.469 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
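Sourcing nvmf/common.sh for the next test replays the same environment setup traced below: fixed ports (4420/4421/4422 plus discovery), a freshly generated host NQN, and a veth topology with the initiator-side interfaces in the default namespace and the target-side interfaces inside nvmf_tgt_ns_spdk. An abbreviated sketch of that bring-up, condensed from the commands that follow in the trace (the second veth pair and the nvmf_br bridge are omitted here):

    # initiator side: nvmf_init_if 10.0.0.1/24, nvmf_init_if2 10.0.0.2/24
    # target side (in nvmf_tgt_ns_spdk): nvmf_tgt_if 10.0.0.3/24, nvmf_tgt_if2 10.0.0.4/24
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if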
00:18:49.469 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:18:49.469 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:49.469 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:49.469 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:49.469 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:49.469 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:49.469 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:49.469 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:49.469 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:49.470 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:49.470 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:49.728 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c 00:18:49.728 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=9203ba0c-8506-4f0b-a886-a7f874c4694c 00:18:49.728 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:49.728 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:49.728 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:49.728 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:49.728 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:49.728 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:18:49.728 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:49.728 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:49.728 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:49.728 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:49.728 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:49.728 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:49.728 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:18:49.728 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:49.728 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:18:49.728 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:49.728 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:49.728 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:49.728 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:49.728 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:49.728 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:49.728 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:49.728 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:49.728 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:49.728 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:49.728 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:18:49.728 09:46:37 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:49.728 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:49.728 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:49.728 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:49.728 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:49.728 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:49.728 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:49.728 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:49.728 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:18:49.728 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:18:49.728 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:18:49.728 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:18:49.728 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:18:49.728 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:18:49.728 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:49.728 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:49.729 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:49.729 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:49.729 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:49.729 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:18:49.729 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:49.729 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:49.729 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:49.729 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:49.729 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:49.729 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:49.729 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:49.729 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:49.729 09:46:37 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:49.729 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:49.729 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:18:49.729 Cannot find device "nvmf_init_br" 00:18:49.729 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # true 00:18:49.729 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:18:49.729 Cannot find device "nvmf_init_br2" 00:18:49.729 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # true 00:18:49.729 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:18:49.729 Cannot find device "nvmf_tgt_br" 00:18:49.729 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # true 00:18:49.729 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:18:49.729 Cannot find device "nvmf_tgt_br2" 00:18:49.729 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # true 00:18:49.729 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:18:49.729 Cannot find device "nvmf_init_br" 00:18:49.729 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # true 00:18:49.729 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:18:49.729 Cannot find device "nvmf_init_br2" 00:18:49.729 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # true 00:18:49.729 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:18:49.729 Cannot find device "nvmf_tgt_br" 00:18:49.729 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # true 00:18:49.729 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:18:49.729 Cannot find device "nvmf_tgt_br2" 00:18:49.729 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # true 00:18:49.729 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:18:49.729 Cannot find device "nvmf_br" 00:18:49.729 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # true 00:18:49.729 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:18:49.729 Cannot find device "nvmf_init_if" 00:18:49.729 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # true 00:18:49.729 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:18:49.729 Cannot find device "nvmf_init_if2" 00:18:49.729 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # true 00:18:49.729 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:49.729 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:49.729 09:46:37 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # true 00:18:49.729 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:49.729 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:49.729 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # true 00:18:49.729 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:18:49.729 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:49.729 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:18:49.729 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:49.729 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:49.729 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:49.729 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:49.729 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:49.729 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:18:49.729 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:49.729 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:49.729 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:18:49.729 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:18:49.729 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:18:49.729 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:18:49.729 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:18:49.729 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:18:49.987 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:49.987 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:49.987 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:49.987 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:18:49.987 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:18:49.987 09:46:37 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:18:49.987 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:18:49.987 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:49.987 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:49.987 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:49.987 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:49.987 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:49.987 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:49.987 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:49.987 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:18:49.987 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:18:49.987 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:49.987 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.088 ms 00:18:49.987 00:18:49.987 --- 10.0.0.3 ping statistics --- 00:18:49.987 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:49.987 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:18:49.987 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:18:49.987 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:18:49.987 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.052 ms 00:18:49.987 00:18:49.987 --- 10.0.0.4 ping statistics --- 00:18:49.987 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:49.987 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:18:49.987 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:49.987 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:49.987 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:18:49.987 00:18:49.987 --- 10.0.0.1 ping statistics --- 00:18:49.987 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:49.987 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:18:49.987 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:18:49.987 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:49.987 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms 00:18:49.987 00:18:49.987 --- 10.0.0.2 ping statistics --- 00:18:49.987 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:49.987 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:18:49.987 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:49.987 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@461 -- # return 0 00:18:49.987 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:49.987 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:49.987 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:49.987 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:49.987 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:49.987 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:49.987 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:49.987 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:18:49.987 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:18:49.987 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:18:49.987 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:49.987 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:49.987 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:49.987 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:49.987 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:49.987 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:49.987 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:49.987 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:49.987 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:49.987 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:18:49.987 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:18:49.987 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:18:49.987 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:18:49.987 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:18:49.987 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:18:49.987 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:18:49.987 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:18:49.987 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:18:49.987 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:18:49.987 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:18:49.987 09:46:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:18:50.245 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:50.503 Waiting for block devices as requested 00:18:50.503 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:18:50.503 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:18:50.503 09:46:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:18:50.503 09:46:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:18:50.503 09:46:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:18:50.503 09:46:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:18:50.503 09:46:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:18:50.503 09:46:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:18:50.503 09:46:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:18:50.503 09:46:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:18:50.503 09:46:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:18:50.760 No valid GPT data, bailing 00:18:50.760 09:46:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:18:50.760 09:46:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:18:50.760 09:46:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:18:50.760 09:46:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:18:50.760 09:46:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:18:50.760 09:46:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:18:50.760 09:46:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:18:50.760 09:46:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:18:50.760 09:46:38 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:18:50.760 09:46:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:18:50.760 09:46:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:18:50.760 09:46:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:18:50.760 09:46:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:18:50.760 No valid GPT data, bailing 00:18:50.760 09:46:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:18:50.760 09:46:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:18:50.760 09:46:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:18:50.760 09:46:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:18:50.760 09:46:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:18:50.761 09:46:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:18:50.761 09:46:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:18:50.761 09:46:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:18:50.761 09:46:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:18:50.761 09:46:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:18:50.761 09:46:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:18:50.761 09:46:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:18:50.761 09:46:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:18:50.761 No valid GPT data, bailing 00:18:50.761 09:46:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:18:50.761 09:46:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:18:50.761 09:46:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:18:50.761 09:46:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:18:50.761 09:46:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:18:50.761 09:46:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:18:50.761 09:46:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:18:50.761 09:46:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:18:50.761 09:46:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:18:50.761 09:46:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
common/autotest_common.sh@1653 -- # [[ none != none ]] 00:18:50.761 09:46:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:18:50.761 09:46:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:18:50.761 09:46:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:18:51.019 No valid GPT data, bailing 00:18:51.019 09:46:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:18:51.019 09:46:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:18:51.019 09:46:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:18:51.019 09:46:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:18:51.019 09:46:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme1n1 ]] 00:18:51.019 09:46:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:18:51.019 09:46:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:18:51.019 09:46:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:18:51.019 09:46:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:18:51.019 09:46:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:18:51.019 09:46:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:18:51.019 09:46:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:18:51.019 09:46:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:18:51.019 09:46:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:18:51.019 09:46:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:18:51.019 09:46:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:18:51.019 09:46:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:18:51.019 09:46:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c --hostid=9203ba0c-8506-4f0b-a886-a7f874c4694c -a 10.0.0.1 -t tcp -s 4420 00:18:51.019 00:18:51.019 Discovery Log Number of Records 2, Generation counter 2 00:18:51.019 =====Discovery Log Entry 0====== 00:18:51.019 trtype: tcp 00:18:51.019 adrfam: ipv4 00:18:51.019 subtype: current discovery subsystem 00:18:51.019 treq: not specified, sq flow control disable supported 00:18:51.019 portid: 1 00:18:51.019 trsvcid: 4420 00:18:51.019 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:18:51.019 traddr: 10.0.0.1 00:18:51.019 eflags: none 00:18:51.019 sectype: none 00:18:51.019 =====Discovery Log Entry 1====== 00:18:51.019 trtype: tcp 00:18:51.019 adrfam: ipv4 00:18:51.019 subtype: nvme subsystem 00:18:51.019 treq: not 
specified, sq flow control disable supported 00:18:51.019 portid: 1 00:18:51.019 trsvcid: 4420 00:18:51.019 subnqn: nqn.2016-06.io.spdk:testnqn 00:18:51.019 traddr: 10.0.0.1 00:18:51.019 eflags: none 00:18:51.019 sectype: none 00:18:51.019 09:46:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:18:51.019 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:18:51.278 ===================================================== 00:18:51.278 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:18:51.278 ===================================================== 00:18:51.278 Controller Capabilities/Features 00:18:51.278 ================================ 00:18:51.278 Vendor ID: 0000 00:18:51.278 Subsystem Vendor ID: 0000 00:18:51.278 Serial Number: 87296f8f94bd1736a1dc 00:18:51.278 Model Number: Linux 00:18:51.278 Firmware Version: 6.8.9-20 00:18:51.278 Recommended Arb Burst: 0 00:18:51.278 IEEE OUI Identifier: 00 00 00 00:18:51.278 Multi-path I/O 00:18:51.278 May have multiple subsystem ports: No 00:18:51.278 May have multiple controllers: No 00:18:51.278 Associated with SR-IOV VF: No 00:18:51.278 Max Data Transfer Size: Unlimited 00:18:51.278 Max Number of Namespaces: 0 00:18:51.278 Max Number of I/O Queues: 1024 00:18:51.278 NVMe Specification Version (VS): 1.3 00:18:51.278 NVMe Specification Version (Identify): 1.3 00:18:51.278 Maximum Queue Entries: 1024 00:18:51.279 Contiguous Queues Required: No 00:18:51.279 Arbitration Mechanisms Supported 00:18:51.279 Weighted Round Robin: Not Supported 00:18:51.279 Vendor Specific: Not Supported 00:18:51.279 Reset Timeout: 7500 ms 00:18:51.279 Doorbell Stride: 4 bytes 00:18:51.279 NVM Subsystem Reset: Not Supported 00:18:51.279 Command Sets Supported 00:18:51.279 NVM Command Set: Supported 00:18:51.279 Boot Partition: Not Supported 00:18:51.279 Memory Page Size Minimum: 4096 bytes 00:18:51.279 Memory Page Size Maximum: 4096 bytes 00:18:51.279 Persistent Memory Region: Not Supported 00:18:51.279 Optional Asynchronous Events Supported 00:18:51.279 Namespace Attribute Notices: Not Supported 00:18:51.279 Firmware Activation Notices: Not Supported 00:18:51.279 ANA Change Notices: Not Supported 00:18:51.279 PLE Aggregate Log Change Notices: Not Supported 00:18:51.279 LBA Status Info Alert Notices: Not Supported 00:18:51.279 EGE Aggregate Log Change Notices: Not Supported 00:18:51.279 Normal NVM Subsystem Shutdown event: Not Supported 00:18:51.279 Zone Descriptor Change Notices: Not Supported 00:18:51.279 Discovery Log Change Notices: Supported 00:18:51.279 Controller Attributes 00:18:51.279 128-bit Host Identifier: Not Supported 00:18:51.279 Non-Operational Permissive Mode: Not Supported 00:18:51.279 NVM Sets: Not Supported 00:18:51.279 Read Recovery Levels: Not Supported 00:18:51.279 Endurance Groups: Not Supported 00:18:51.279 Predictable Latency Mode: Not Supported 00:18:51.279 Traffic Based Keep ALive: Not Supported 00:18:51.279 Namespace Granularity: Not Supported 00:18:51.279 SQ Associations: Not Supported 00:18:51.279 UUID List: Not Supported 00:18:51.279 Multi-Domain Subsystem: Not Supported 00:18:51.279 Fixed Capacity Management: Not Supported 00:18:51.279 Variable Capacity Management: Not Supported 00:18:51.279 Delete Endurance Group: Not Supported 00:18:51.279 Delete NVM Set: Not Supported 00:18:51.279 Extended LBA Formats Supported: Not Supported 00:18:51.279 Flexible Data 
Placement Supported: Not Supported 00:18:51.279 00:18:51.279 Controller Memory Buffer Support 00:18:51.279 ================================ 00:18:51.279 Supported: No 00:18:51.279 00:18:51.279 Persistent Memory Region Support 00:18:51.279 ================================ 00:18:51.279 Supported: No 00:18:51.279 00:18:51.279 Admin Command Set Attributes 00:18:51.279 ============================ 00:18:51.279 Security Send/Receive: Not Supported 00:18:51.279 Format NVM: Not Supported 00:18:51.279 Firmware Activate/Download: Not Supported 00:18:51.279 Namespace Management: Not Supported 00:18:51.279 Device Self-Test: Not Supported 00:18:51.279 Directives: Not Supported 00:18:51.279 NVMe-MI: Not Supported 00:18:51.279 Virtualization Management: Not Supported 00:18:51.279 Doorbell Buffer Config: Not Supported 00:18:51.279 Get LBA Status Capability: Not Supported 00:18:51.279 Command & Feature Lockdown Capability: Not Supported 00:18:51.279 Abort Command Limit: 1 00:18:51.279 Async Event Request Limit: 1 00:18:51.279 Number of Firmware Slots: N/A 00:18:51.279 Firmware Slot 1 Read-Only: N/A 00:18:51.279 Firmware Activation Without Reset: N/A 00:18:51.279 Multiple Update Detection Support: N/A 00:18:51.279 Firmware Update Granularity: No Information Provided 00:18:51.279 Per-Namespace SMART Log: No 00:18:51.279 Asymmetric Namespace Access Log Page: Not Supported 00:18:51.279 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:18:51.279 Command Effects Log Page: Not Supported 00:18:51.279 Get Log Page Extended Data: Supported 00:18:51.279 Telemetry Log Pages: Not Supported 00:18:51.279 Persistent Event Log Pages: Not Supported 00:18:51.279 Supported Log Pages Log Page: May Support 00:18:51.279 Commands Supported & Effects Log Page: Not Supported 00:18:51.279 Feature Identifiers & Effects Log Page:May Support 00:18:51.279 NVMe-MI Commands & Effects Log Page: May Support 00:18:51.279 Data Area 4 for Telemetry Log: Not Supported 00:18:51.279 Error Log Page Entries Supported: 1 00:18:51.279 Keep Alive: Not Supported 00:18:51.279 00:18:51.279 NVM Command Set Attributes 00:18:51.279 ========================== 00:18:51.279 Submission Queue Entry Size 00:18:51.279 Max: 1 00:18:51.279 Min: 1 00:18:51.279 Completion Queue Entry Size 00:18:51.279 Max: 1 00:18:51.279 Min: 1 00:18:51.279 Number of Namespaces: 0 00:18:51.279 Compare Command: Not Supported 00:18:51.279 Write Uncorrectable Command: Not Supported 00:18:51.279 Dataset Management Command: Not Supported 00:18:51.279 Write Zeroes Command: Not Supported 00:18:51.279 Set Features Save Field: Not Supported 00:18:51.279 Reservations: Not Supported 00:18:51.279 Timestamp: Not Supported 00:18:51.279 Copy: Not Supported 00:18:51.279 Volatile Write Cache: Not Present 00:18:51.279 Atomic Write Unit (Normal): 1 00:18:51.279 Atomic Write Unit (PFail): 1 00:18:51.279 Atomic Compare & Write Unit: 1 00:18:51.279 Fused Compare & Write: Not Supported 00:18:51.279 Scatter-Gather List 00:18:51.279 SGL Command Set: Supported 00:18:51.279 SGL Keyed: Not Supported 00:18:51.279 SGL Bit Bucket Descriptor: Not Supported 00:18:51.279 SGL Metadata Pointer: Not Supported 00:18:51.279 Oversized SGL: Not Supported 00:18:51.279 SGL Metadata Address: Not Supported 00:18:51.279 SGL Offset: Supported 00:18:51.279 Transport SGL Data Block: Not Supported 00:18:51.279 Replay Protected Memory Block: Not Supported 00:18:51.279 00:18:51.279 Firmware Slot Information 00:18:51.279 ========================= 00:18:51.279 Active slot: 0 00:18:51.279 00:18:51.279 00:18:51.279 Error Log 
00:18:51.279 ========= 00:18:51.279 00:18:51.279 Active Namespaces 00:18:51.279 ================= 00:18:51.279 Discovery Log Page 00:18:51.279 ================== 00:18:51.279 Generation Counter: 2 00:18:51.279 Number of Records: 2 00:18:51.279 Record Format: 0 00:18:51.279 00:18:51.279 Discovery Log Entry 0 00:18:51.279 ---------------------- 00:18:51.279 Transport Type: 3 (TCP) 00:18:51.279 Address Family: 1 (IPv4) 00:18:51.279 Subsystem Type: 3 (Current Discovery Subsystem) 00:18:51.279 Entry Flags: 00:18:51.279 Duplicate Returned Information: 0 00:18:51.279 Explicit Persistent Connection Support for Discovery: 0 00:18:51.279 Transport Requirements: 00:18:51.279 Secure Channel: Not Specified 00:18:51.279 Port ID: 1 (0x0001) 00:18:51.279 Controller ID: 65535 (0xffff) 00:18:51.279 Admin Max SQ Size: 32 00:18:51.279 Transport Service Identifier: 4420 00:18:51.279 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:18:51.279 Transport Address: 10.0.0.1 00:18:51.279 Discovery Log Entry 1 00:18:51.279 ---------------------- 00:18:51.279 Transport Type: 3 (TCP) 00:18:51.279 Address Family: 1 (IPv4) 00:18:51.279 Subsystem Type: 2 (NVM Subsystem) 00:18:51.279 Entry Flags: 00:18:51.279 Duplicate Returned Information: 0 00:18:51.279 Explicit Persistent Connection Support for Discovery: 0 00:18:51.279 Transport Requirements: 00:18:51.279 Secure Channel: Not Specified 00:18:51.279 Port ID: 1 (0x0001) 00:18:51.279 Controller ID: 65535 (0xffff) 00:18:51.279 Admin Max SQ Size: 32 00:18:51.279 Transport Service Identifier: 4420 00:18:51.279 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:18:51.279 Transport Address: 10.0.0.1 00:18:51.279 09:46:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:18:51.279 get_feature(0x01) failed 00:18:51.279 get_feature(0x02) failed 00:18:51.279 get_feature(0x04) failed 00:18:51.279 ===================================================== 00:18:51.279 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:18:51.279 ===================================================== 00:18:51.279 Controller Capabilities/Features 00:18:51.279 ================================ 00:18:51.279 Vendor ID: 0000 00:18:51.279 Subsystem Vendor ID: 0000 00:18:51.279 Serial Number: 96e43dc47daf8e8d19b4 00:18:51.279 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:18:51.279 Firmware Version: 6.8.9-20 00:18:51.279 Recommended Arb Burst: 6 00:18:51.279 IEEE OUI Identifier: 00 00 00 00:18:51.279 Multi-path I/O 00:18:51.279 May have multiple subsystem ports: Yes 00:18:51.279 May have multiple controllers: Yes 00:18:51.279 Associated with SR-IOV VF: No 00:18:51.279 Max Data Transfer Size: Unlimited 00:18:51.279 Max Number of Namespaces: 1024 00:18:51.279 Max Number of I/O Queues: 128 00:18:51.279 NVMe Specification Version (VS): 1.3 00:18:51.279 NVMe Specification Version (Identify): 1.3 00:18:51.279 Maximum Queue Entries: 1024 00:18:51.280 Contiguous Queues Required: No 00:18:51.280 Arbitration Mechanisms Supported 00:18:51.280 Weighted Round Robin: Not Supported 00:18:51.280 Vendor Specific: Not Supported 00:18:51.280 Reset Timeout: 7500 ms 00:18:51.280 Doorbell Stride: 4 bytes 00:18:51.280 NVM Subsystem Reset: Not Supported 00:18:51.280 Command Sets Supported 00:18:51.280 NVM Command Set: Supported 00:18:51.280 Boot Partition: Not Supported 00:18:51.280 Memory 
Page Size Minimum: 4096 bytes 00:18:51.280 Memory Page Size Maximum: 4096 bytes 00:18:51.280 Persistent Memory Region: Not Supported 00:18:51.280 Optional Asynchronous Events Supported 00:18:51.280 Namespace Attribute Notices: Supported 00:18:51.280 Firmware Activation Notices: Not Supported 00:18:51.280 ANA Change Notices: Supported 00:18:51.280 PLE Aggregate Log Change Notices: Not Supported 00:18:51.280 LBA Status Info Alert Notices: Not Supported 00:18:51.280 EGE Aggregate Log Change Notices: Not Supported 00:18:51.280 Normal NVM Subsystem Shutdown event: Not Supported 00:18:51.280 Zone Descriptor Change Notices: Not Supported 00:18:51.280 Discovery Log Change Notices: Not Supported 00:18:51.280 Controller Attributes 00:18:51.280 128-bit Host Identifier: Supported 00:18:51.280 Non-Operational Permissive Mode: Not Supported 00:18:51.280 NVM Sets: Not Supported 00:18:51.280 Read Recovery Levels: Not Supported 00:18:51.280 Endurance Groups: Not Supported 00:18:51.280 Predictable Latency Mode: Not Supported 00:18:51.280 Traffic Based Keep ALive: Supported 00:18:51.280 Namespace Granularity: Not Supported 00:18:51.280 SQ Associations: Not Supported 00:18:51.280 UUID List: Not Supported 00:18:51.280 Multi-Domain Subsystem: Not Supported 00:18:51.280 Fixed Capacity Management: Not Supported 00:18:51.280 Variable Capacity Management: Not Supported 00:18:51.280 Delete Endurance Group: Not Supported 00:18:51.280 Delete NVM Set: Not Supported 00:18:51.280 Extended LBA Formats Supported: Not Supported 00:18:51.280 Flexible Data Placement Supported: Not Supported 00:18:51.280 00:18:51.280 Controller Memory Buffer Support 00:18:51.280 ================================ 00:18:51.280 Supported: No 00:18:51.280 00:18:51.280 Persistent Memory Region Support 00:18:51.280 ================================ 00:18:51.280 Supported: No 00:18:51.280 00:18:51.280 Admin Command Set Attributes 00:18:51.280 ============================ 00:18:51.280 Security Send/Receive: Not Supported 00:18:51.280 Format NVM: Not Supported 00:18:51.280 Firmware Activate/Download: Not Supported 00:18:51.280 Namespace Management: Not Supported 00:18:51.280 Device Self-Test: Not Supported 00:18:51.280 Directives: Not Supported 00:18:51.280 NVMe-MI: Not Supported 00:18:51.280 Virtualization Management: Not Supported 00:18:51.280 Doorbell Buffer Config: Not Supported 00:18:51.280 Get LBA Status Capability: Not Supported 00:18:51.280 Command & Feature Lockdown Capability: Not Supported 00:18:51.280 Abort Command Limit: 4 00:18:51.280 Async Event Request Limit: 4 00:18:51.280 Number of Firmware Slots: N/A 00:18:51.280 Firmware Slot 1 Read-Only: N/A 00:18:51.280 Firmware Activation Without Reset: N/A 00:18:51.280 Multiple Update Detection Support: N/A 00:18:51.280 Firmware Update Granularity: No Information Provided 00:18:51.280 Per-Namespace SMART Log: Yes 00:18:51.280 Asymmetric Namespace Access Log Page: Supported 00:18:51.280 ANA Transition Time : 10 sec 00:18:51.280 00:18:51.280 Asymmetric Namespace Access Capabilities 00:18:51.280 ANA Optimized State : Supported 00:18:51.280 ANA Non-Optimized State : Supported 00:18:51.280 ANA Inaccessible State : Supported 00:18:51.280 ANA Persistent Loss State : Supported 00:18:51.280 ANA Change State : Supported 00:18:51.280 ANAGRPID is not changed : No 00:18:51.280 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:18:51.280 00:18:51.280 ANA Group Identifier Maximum : 128 00:18:51.280 Number of ANA Group Identifiers : 128 00:18:51.280 Max Number of Allowed Namespaces : 1024 00:18:51.280 Subsystem 
NQN: nqn.2016-06.io.spdk:testnqn 00:18:51.280 Command Effects Log Page: Supported 00:18:51.280 Get Log Page Extended Data: Supported 00:18:51.280 Telemetry Log Pages: Not Supported 00:18:51.280 Persistent Event Log Pages: Not Supported 00:18:51.280 Supported Log Pages Log Page: May Support 00:18:51.280 Commands Supported & Effects Log Page: Not Supported 00:18:51.280 Feature Identifiers & Effects Log Page:May Support 00:18:51.280 NVMe-MI Commands & Effects Log Page: May Support 00:18:51.280 Data Area 4 for Telemetry Log: Not Supported 00:18:51.280 Error Log Page Entries Supported: 128 00:18:51.280 Keep Alive: Supported 00:18:51.280 Keep Alive Granularity: 1000 ms 00:18:51.280 00:18:51.280 NVM Command Set Attributes 00:18:51.280 ========================== 00:18:51.280 Submission Queue Entry Size 00:18:51.280 Max: 64 00:18:51.280 Min: 64 00:18:51.280 Completion Queue Entry Size 00:18:51.280 Max: 16 00:18:51.280 Min: 16 00:18:51.280 Number of Namespaces: 1024 00:18:51.280 Compare Command: Not Supported 00:18:51.280 Write Uncorrectable Command: Not Supported 00:18:51.280 Dataset Management Command: Supported 00:18:51.280 Write Zeroes Command: Supported 00:18:51.280 Set Features Save Field: Not Supported 00:18:51.280 Reservations: Not Supported 00:18:51.280 Timestamp: Not Supported 00:18:51.280 Copy: Not Supported 00:18:51.280 Volatile Write Cache: Present 00:18:51.280 Atomic Write Unit (Normal): 1 00:18:51.280 Atomic Write Unit (PFail): 1 00:18:51.280 Atomic Compare & Write Unit: 1 00:18:51.280 Fused Compare & Write: Not Supported 00:18:51.280 Scatter-Gather List 00:18:51.280 SGL Command Set: Supported 00:18:51.280 SGL Keyed: Not Supported 00:18:51.280 SGL Bit Bucket Descriptor: Not Supported 00:18:51.280 SGL Metadata Pointer: Not Supported 00:18:51.280 Oversized SGL: Not Supported 00:18:51.280 SGL Metadata Address: Not Supported 00:18:51.280 SGL Offset: Supported 00:18:51.280 Transport SGL Data Block: Not Supported 00:18:51.280 Replay Protected Memory Block: Not Supported 00:18:51.280 00:18:51.280 Firmware Slot Information 00:18:51.280 ========================= 00:18:51.280 Active slot: 0 00:18:51.280 00:18:51.280 Asymmetric Namespace Access 00:18:51.280 =========================== 00:18:51.280 Change Count : 0 00:18:51.280 Number of ANA Group Descriptors : 1 00:18:51.280 ANA Group Descriptor : 0 00:18:51.280 ANA Group ID : 1 00:18:51.280 Number of NSID Values : 1 00:18:51.280 Change Count : 0 00:18:51.280 ANA State : 1 00:18:51.280 Namespace Identifier : 1 00:18:51.280 00:18:51.280 Commands Supported and Effects 00:18:51.280 ============================== 00:18:51.280 Admin Commands 00:18:51.280 -------------- 00:18:51.280 Get Log Page (02h): Supported 00:18:51.280 Identify (06h): Supported 00:18:51.280 Abort (08h): Supported 00:18:51.280 Set Features (09h): Supported 00:18:51.280 Get Features (0Ah): Supported 00:18:51.280 Asynchronous Event Request (0Ch): Supported 00:18:51.280 Keep Alive (18h): Supported 00:18:51.280 I/O Commands 00:18:51.280 ------------ 00:18:51.280 Flush (00h): Supported 00:18:51.280 Write (01h): Supported LBA-Change 00:18:51.280 Read (02h): Supported 00:18:51.280 Write Zeroes (08h): Supported LBA-Change 00:18:51.280 Dataset Management (09h): Supported 00:18:51.280 00:18:51.280 Error Log 00:18:51.280 ========= 00:18:51.280 Entry: 0 00:18:51.280 Error Count: 0x3 00:18:51.280 Submission Queue Id: 0x0 00:18:51.280 Command Id: 0x5 00:18:51.280 Phase Bit: 0 00:18:51.280 Status Code: 0x2 00:18:51.280 Status Code Type: 0x0 00:18:51.280 Do Not Retry: 1 00:18:51.280 Error 
Location: 0x28 00:18:51.280 LBA: 0x0 00:18:51.280 Namespace: 0x0 00:18:51.280 Vendor Log Page: 0x0 00:18:51.280 ----------- 00:18:51.280 Entry: 1 00:18:51.280 Error Count: 0x2 00:18:51.280 Submission Queue Id: 0x0 00:18:51.280 Command Id: 0x5 00:18:51.280 Phase Bit: 0 00:18:51.280 Status Code: 0x2 00:18:51.280 Status Code Type: 0x0 00:18:51.280 Do Not Retry: 1 00:18:51.280 Error Location: 0x28 00:18:51.280 LBA: 0x0 00:18:51.280 Namespace: 0x0 00:18:51.280 Vendor Log Page: 0x0 00:18:51.280 ----------- 00:18:51.280 Entry: 2 00:18:51.280 Error Count: 0x1 00:18:51.280 Submission Queue Id: 0x0 00:18:51.280 Command Id: 0x4 00:18:51.280 Phase Bit: 0 00:18:51.280 Status Code: 0x2 00:18:51.280 Status Code Type: 0x0 00:18:51.280 Do Not Retry: 1 00:18:51.280 Error Location: 0x28 00:18:51.280 LBA: 0x0 00:18:51.280 Namespace: 0x0 00:18:51.280 Vendor Log Page: 0x0 00:18:51.280 00:18:51.281 Number of Queues 00:18:51.281 ================ 00:18:51.281 Number of I/O Submission Queues: 128 00:18:51.281 Number of I/O Completion Queues: 128 00:18:51.281 00:18:51.281 ZNS Specific Controller Data 00:18:51.281 ============================ 00:18:51.281 Zone Append Size Limit: 0 00:18:51.281 00:18:51.281 00:18:51.281 Active Namespaces 00:18:51.281 ================= 00:18:51.281 get_feature(0x05) failed 00:18:51.281 Namespace ID:1 00:18:51.281 Command Set Identifier: NVM (00h) 00:18:51.281 Deallocate: Supported 00:18:51.281 Deallocated/Unwritten Error: Not Supported 00:18:51.281 Deallocated Read Value: Unknown 00:18:51.281 Deallocate in Write Zeroes: Not Supported 00:18:51.281 Deallocated Guard Field: 0xFFFF 00:18:51.281 Flush: Supported 00:18:51.281 Reservation: Not Supported 00:18:51.281 Namespace Sharing Capabilities: Multiple Controllers 00:18:51.281 Size (in LBAs): 1310720 (5GiB) 00:18:51.281 Capacity (in LBAs): 1310720 (5GiB) 00:18:51.281 Utilization (in LBAs): 1310720 (5GiB) 00:18:51.281 UUID: 5dd1a778-9cd0-4006-8774-9f88024ccd34 00:18:51.281 Thin Provisioning: Not Supported 00:18:51.281 Per-NS Atomic Units: Yes 00:18:51.281 Atomic Boundary Size (Normal): 0 00:18:51.281 Atomic Boundary Size (PFail): 0 00:18:51.281 Atomic Boundary Offset: 0 00:18:51.281 NGUID/EUI64 Never Reused: No 00:18:51.281 ANA group ID: 1 00:18:51.281 Namespace Write Protected: No 00:18:51.281 Number of LBA Formats: 1 00:18:51.281 Current LBA Format: LBA Format #00 00:18:51.281 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:18:51.281 00:18:51.281 09:46:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:18:51.281 09:46:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:51.281 09:46:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:18:51.539 09:46:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:51.539 09:46:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:18:51.539 09:46:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:51.539 09:46:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:51.539 rmmod nvme_tcp 00:18:51.539 rmmod nvme_fabrics 00:18:51.539 09:46:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:51.539 09:46:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:18:51.539 09:46:38 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:18:51.539 09:46:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:18:51.539 09:46:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:51.539 09:46:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:51.539 09:46:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:51.539 09:46:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:18:51.539 09:46:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:18:51.539 09:46:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:51.539 09:46:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:18:51.539 09:46:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:51.539 09:46:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:18:51.539 09:46:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:18:51.539 09:46:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:18:51.540 09:46:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:18:51.540 09:46:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:18:51.540 09:46:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:18:51.540 09:46:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:18:51.540 09:46:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:18:51.540 09:46:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:18:51.540 09:46:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:18:51.540 09:46:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:18:51.540 09:46:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:18:51.540 09:46:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:51.540 09:46:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:51.540 09:46:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:18:51.540 09:46:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:51.540 09:46:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:51.540 09:46:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:51.802 09:46:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@300 -- 
# return 0 00:18:51.802 09:46:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:18:51.802 09:46:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:18:51.802 09:46:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:18:51.802 09:46:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:18:51.802 09:46:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:18:51.802 09:46:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:18:51.802 09:46:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:18:51.802 09:46:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:18:51.802 09:46:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:18:51.802 09:46:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:18:52.370 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:52.628 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:18:52.628 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:18:52.628 00:18:52.628 real 0m3.201s 00:18:52.628 user 0m1.153s 00:18:52.628 sys 0m1.461s 00:18:52.628 ************************************ 00:18:52.628 END TEST nvmf_identify_kernel_target 00:18:52.628 ************************************ 00:18:52.628 09:46:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:52.628 09:46:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.628 09:46:40 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:18:52.628 09:46:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:52.628 09:46:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:52.628 09:46:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:18:52.628 ************************************ 00:18:52.628 START TEST nvmf_auth_host 00:18:52.628 ************************************ 00:18:52.628 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:18:52.628 * Looking for test storage... 
00:18:52.628 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:52.628 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:52.628 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lcov --version 00:18:52.628 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:52.888 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:52.888 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:52.888 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:52.888 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:52.888 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:18:52.888 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:18:52.888 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:18:52.888 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:18:52.888 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:18:52.888 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:18:52.888 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:18:52.888 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:52.888 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:18:52.888 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:18:52.888 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:52.888 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:52.888 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:18:52.888 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:18:52.888 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:52.888 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:18:52.888 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:18:52.888 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:18:52.888 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:18:52.888 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:52.888 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:18:52.888 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:18:52.888 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:52.888 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:52.888 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:18:52.888 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:52.888 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:52.888 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:52.888 --rc genhtml_branch_coverage=1 00:18:52.888 --rc genhtml_function_coverage=1 00:18:52.888 --rc genhtml_legend=1 00:18:52.888 --rc geninfo_all_blocks=1 00:18:52.888 --rc geninfo_unexecuted_blocks=1 00:18:52.888 00:18:52.888 ' 00:18:52.888 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:52.888 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:52.888 --rc genhtml_branch_coverage=1 00:18:52.888 --rc genhtml_function_coverage=1 00:18:52.888 --rc genhtml_legend=1 00:18:52.888 --rc geninfo_all_blocks=1 00:18:52.888 --rc geninfo_unexecuted_blocks=1 00:18:52.888 00:18:52.888 ' 00:18:52.888 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:52.888 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:52.888 --rc genhtml_branch_coverage=1 00:18:52.888 --rc genhtml_function_coverage=1 00:18:52.888 --rc genhtml_legend=1 00:18:52.888 --rc geninfo_all_blocks=1 00:18:52.888 --rc geninfo_unexecuted_blocks=1 00:18:52.888 00:18:52.888 ' 00:18:52.888 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:52.888 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:52.888 --rc genhtml_branch_coverage=1 00:18:52.888 --rc genhtml_function_coverage=1 00:18:52.888 --rc genhtml_legend=1 00:18:52.888 --rc geninfo_all_blocks=1 00:18:52.888 --rc geninfo_unexecuted_blocks=1 00:18:52.888 00:18:52.888 ' 00:18:52.888 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:52.888 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:18:52.888 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:52.888 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:52.888 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:52.888 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:52.888 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:52.888 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:52.888 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:52.888 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:52.888 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:52.888 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:52.888 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c 00:18:52.888 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=9203ba0c-8506-4f0b-a886-a7f874c4694c 00:18:52.888 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:52.888 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:52.888 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:52.888 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:52.888 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:52.888 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:18:52.888 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:52.888 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:52.888 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:52.888 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:52.888 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:52.888 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:52.888 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:18:52.888 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:52.888 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:18:52.888 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:52.888 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:52.888 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:52.888 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:52.888 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:52.888 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:52.888 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:52.888 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:52.888 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:52.888 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:52.888 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:18:52.888 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:18:52.888 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:18:52.888 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:18:52.888 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:18:52.888 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:18:52.888 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:18:52.888 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:18:52.888 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:18:52.888 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:52.888 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:52.888 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:52.888 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:52.888 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:52.888 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:52.888 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:52.888 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:52.888 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:18:52.888 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:18:52.888 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:18:52.888 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:18:52.888 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:18:52.888 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@460 -- # nvmf_veth_init 00:18:52.888 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:52.888 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:52.889 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:52.889 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:52.889 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:52.889 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:18:52.889 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:52.889 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:52.889 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:52.889 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:52.889 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:52.889 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:52.889 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:52.889 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:52.889 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:52.889 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:52.889 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:18:52.889 Cannot find device "nvmf_init_br" 00:18:52.889 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # true 00:18:52.889 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:18:52.889 Cannot find device "nvmf_init_br2" 00:18:52.889 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # true 00:18:52.889 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:18:52.889 Cannot find device "nvmf_tgt_br" 00:18:52.889 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # true 00:18:52.889 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:18:52.889 Cannot find device "nvmf_tgt_br2" 00:18:52.889 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # true 00:18:52.889 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:18:52.889 Cannot find device "nvmf_init_br" 00:18:52.889 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # true 00:18:52.889 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:18:52.889 Cannot find device "nvmf_init_br2" 00:18:52.889 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # true 00:18:52.889 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:18:52.889 Cannot find device "nvmf_tgt_br" 00:18:52.889 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # true 00:18:52.889 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:18:52.889 Cannot find device "nvmf_tgt_br2" 00:18:52.889 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # true 00:18:52.889 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:18:52.889 Cannot find device "nvmf_br" 00:18:52.889 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # true 00:18:52.889 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:18:52.889 Cannot find device "nvmf_init_if" 00:18:52.889 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # true 00:18:52.889 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:18:53.147 Cannot find device "nvmf_init_if2" 00:18:53.147 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # true 00:18:53.147 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:53.147 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:53.147 09:46:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # true 00:18:53.147 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:53.147 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:53.147 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # true 00:18:53.147 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:18:53.148 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:53.148 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:18:53.148 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:53.148 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:53.148 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:53.148 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:53.148 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:53.148 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:18:53.148 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:53.148 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:53.148 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:18:53.148 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:18:53.148 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:18:53.148 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:18:53.148 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:18:53.148 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:18:53.148 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:53.148 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:53.148 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:53.148 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:18:53.148 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:18:53.148 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:18:53.148 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:18:53.148 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 
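The nvmf_veth_init trace above first removes any leftover interfaces (the "Cannot find device" messages are expected on a clean host) and then rebuilds the test network: a dedicated network namespace for the target, veth pairs whose endpoints carry the initiator (10.0.0.1/2) and target (10.0.0.3/4) addresses, and a bridge joining the host-side peers. A minimal standalone sketch of the same topology, simplified to a single initiator/target pair; the names and addresses are taken from the trace, everything else is an assumption rather than SPDK's exact helper:
# sketch only: one namespace, one veth pair per side, one bridge
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br        # initiator side stays on the host
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br         # target side will live in the namespace
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                   # move the target end into the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if                         # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if   # target address
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br                          # bridge the host-side peers together
ip link set nvmf_tgt_br master nvmf_br
Keeping the target in its own namespace means the initiator can only reach it across the bridged veth path, which is exactly what the 10.0.0.3/10.0.0.4 and in-namespace 10.0.0.1/10.0.0.2 pings below verify.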
00:18:53.148 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:53.148 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:53.148 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:53.148 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:53.148 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:53.148 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:53.148 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:18:53.148 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:18:53.407 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:53.407 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:18:53.407 00:18:53.407 --- 10.0.0.3 ping statistics --- 00:18:53.407 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:53.407 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:18:53.407 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:18:53.407 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:18:53.407 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.123 ms 00:18:53.407 00:18:53.407 --- 10.0.0.4 ping statistics --- 00:18:53.407 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:53.407 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:18:53.407 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:53.407 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:53.407 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:18:53.407 00:18:53.407 --- 10.0.0.1 ping statistics --- 00:18:53.407 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:53.407 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:18:53.407 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:18:53.407 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:53.407 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.148 ms 00:18:53.407 00:18:53.407 --- 10.0.0.2 ping statistics --- 00:18:53.407 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:53.407 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:18:53.407 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:53.407 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@461 -- # return 0 00:18:53.407 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:53.407 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:53.407 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:53.407 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:53.407 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:53.407 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:53.407 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:53.407 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:18:53.407 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:53.407 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:53.407 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:53.407 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=78461 00:18:53.407 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:18:53.407 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 78461 00:18:53.407 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 78461 ']' 00:18:53.407 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:53.407 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:53.407 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
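With connectivity confirmed in both directions, the trace opens TCP port 4420 through the ipts wrapper, which tags each iptables rule with an SPDK_NVMF comment so cleanup can find and remove it later, loads nvme-tcp on the host, and launches nvmf_tgt inside the target namespace via nvmfappstart. A rough sketch of those steps, assuming the paths and flags shown in the trace; the comment string and the simple backgrounding are illustrative only:
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF: allow NVMe/TCP to the test target'   # tagged so teardown can match it
modprobe nvme-tcp                                                         # host-side NVMe/TCP initiator driver
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth &
nvmfpid=$!                                # the real helper instead waits for the RPC socket (waitforlisten)
The -L nvme_auth flag enables the target's authentication debug log component, so the DH-HMAC-CHAP handshake can be inspected in the target output if the test fails.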
00:18:53.407 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:53.407 09:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:53.666 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:53.666 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:18:53.666 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:53.666 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:53.666 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:53.666 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:53.666 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:18:53.666 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:18:53.666 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:18:53.666 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:53.666 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:18:53.666 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:18:53.666 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:18:53.666 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:53.666 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=1147a813a487be8249a1fd567e83c2b0 00:18:53.666 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:18:53.666 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.KfW 00:18:53.666 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 1147a813a487be8249a1fd567e83c2b0 0 00:18:53.666 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 1147a813a487be8249a1fd567e83c2b0 0 00:18:53.666 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:18:53.666 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:53.666 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=1147a813a487be8249a1fd567e83c2b0 00:18:53.666 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:18:53.666 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:18:53.924 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.KfW 00:18:53.924 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.KfW 00:18:53.924 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.KfW 00:18:53.924 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:18:53.924 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:18:53.924 09:46:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:53.924 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:18:53.924 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:18:53.924 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:18:53.924 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:18:53.924 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=536a39caed1f805a94a26b0fab9fe503574576ce78cba5b8952f68b67a42ed92 00:18:53.924 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:18:53.924 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.GGK 00:18:53.924 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 536a39caed1f805a94a26b0fab9fe503574576ce78cba5b8952f68b67a42ed92 3 00:18:53.924 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 536a39caed1f805a94a26b0fab9fe503574576ce78cba5b8952f68b67a42ed92 3 00:18:53.924 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:18:53.924 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:53.924 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=536a39caed1f805a94a26b0fab9fe503574576ce78cba5b8952f68b67a42ed92 00:18:53.924 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:18:53.924 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:18:53.924 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.GGK 00:18:53.924 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.GGK 00:18:53.924 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.GGK 00:18:53.924 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:18:53.924 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:18:53.924 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:53.924 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:18:53.924 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:18:53.924 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:18:53.924 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:53.924 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=44855dd09f78c2dfc78630bde8c0217aa30dc2043d98e625 00:18:53.924 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:18:53.924 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.bmg 00:18:53.924 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 44855dd09f78c2dfc78630bde8c0217aa30dc2043d98e625 0 00:18:53.924 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 44855dd09f78c2dfc78630bde8c0217aa30dc2043d98e625 0 
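The key material above comes from gen_dhchap_key: xxd pulls the requested number of random bytes from /dev/urandom as a hex string, and format_dhchap_key wraps that string into the NVMe DH-HMAC-CHAP secret representation DHHC-1:<hmac-id>:<base64 payload>:, where the hmac-id follows the digests map in the trace (0 = unhashed, 1 = sha256, 2 = sha384, 3 = sha512) and the payload is the ASCII secret with a four-byte CRC32 appended. The sketch below is a reconstruction consistent with the keys visible later in the log (the base64 payload of keys[1] decodes back to its 48-character hex string plus four CRC bytes), not SPDK's literal helper; the little-endian CRC byte order in particular is an assumption:
key=$(xxd -p -c0 -l 24 /dev/urandom)        # 48 hex characters, as for keys[1] above
python3 - "$key" <<'EOF'
import base64, sys, zlib
secret = sys.argv[1].encode()                        # the hex string itself is the secret
crc = zlib.crc32(secret).to_bytes(4, "little")       # integrity check appended per the DHHC-1 format
print("DHHC-1:00:{}:".format(base64.b64encode(secret + crc).decode()))   # 00 = secret used as-is
EOF
Each generated secret is written to a chmod-0600 temp file (mktemp -t spdk.key-*.XXX) so it can be handed to the SPDK keyring by path via keyring_file_add_key further down in the trace.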
00:18:53.924 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:18:53.924 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:53.924 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=44855dd09f78c2dfc78630bde8c0217aa30dc2043d98e625 00:18:53.924 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:18:53.924 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:18:53.924 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.bmg 00:18:53.924 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.bmg 00:18:53.924 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.bmg 00:18:53.924 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:18:53.924 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:18:53.924 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:53.924 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:18:53.924 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:18:53.924 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:18:53.924 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:53.925 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=d9e6b1c3366e416c6f4d84a31079b56b675354de92a5ae3b 00:18:53.925 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:18:53.925 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.zOG 00:18:53.925 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key d9e6b1c3366e416c6f4d84a31079b56b675354de92a5ae3b 2 00:18:53.925 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 d9e6b1c3366e416c6f4d84a31079b56b675354de92a5ae3b 2 00:18:53.925 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:18:53.925 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:53.925 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=d9e6b1c3366e416c6f4d84a31079b56b675354de92a5ae3b 00:18:53.925 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:18:53.925 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:18:53.925 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.zOG 00:18:53.925 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.zOG 00:18:53.925 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.zOG 00:18:53.925 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:18:53.925 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:18:53.925 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:53.925 09:46:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:18:53.925 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:18:53.925 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:18:53.925 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:53.925 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=f45aebe777b2823493132420f4182272 00:18:53.925 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:18:53.925 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.QQb 00:18:53.925 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key f45aebe777b2823493132420f4182272 1 00:18:53.925 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 f45aebe777b2823493132420f4182272 1 00:18:53.925 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:18:53.925 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:53.925 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=f45aebe777b2823493132420f4182272 00:18:53.925 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:18:53.925 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:18:54.184 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.QQb 00:18:54.184 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.QQb 00:18:54.184 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.QQb 00:18:54.184 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:18:54.184 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:18:54.184 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:54.184 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:18:54.184 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:18:54.184 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:18:54.184 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:54.184 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=2146a59603ba4395e55142b09f23e5ad 00:18:54.184 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:18:54.184 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.aZJ 00:18:54.184 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 2146a59603ba4395e55142b09f23e5ad 1 00:18:54.184 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 2146a59603ba4395e55142b09f23e5ad 1 00:18:54.184 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:18:54.184 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:54.184 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
key=2146a59603ba4395e55142b09f23e5ad 00:18:54.184 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:18:54.184 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:18:54.184 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.aZJ 00:18:54.184 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.aZJ 00:18:54.184 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.aZJ 00:18:54.184 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:18:54.184 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:18:54.184 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:54.184 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:18:54.184 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:18:54.184 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:18:54.184 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:54.184 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=c68e88565f3cc25ef596138a2a0c268d25e5d3c33308e1e8 00:18:54.184 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:18:54.184 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.oXe 00:18:54.184 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key c68e88565f3cc25ef596138a2a0c268d25e5d3c33308e1e8 2 00:18:54.184 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 c68e88565f3cc25ef596138a2a0c268d25e5d3c33308e1e8 2 00:18:54.184 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:18:54.184 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:54.184 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=c68e88565f3cc25ef596138a2a0c268d25e5d3c33308e1e8 00:18:54.184 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:18:54.184 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:18:54.184 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.oXe 00:18:54.184 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.oXe 00:18:54.184 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.oXe 00:18:54.184 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:18:54.184 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:18:54.184 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:54.184 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:18:54.184 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:18:54.184 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:18:54.184 09:46:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:54.184 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=ce930b20cb1edcd53e845ced857181d5 00:18:54.184 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:18:54.184 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.ZCX 00:18:54.184 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key ce930b20cb1edcd53e845ced857181d5 0 00:18:54.184 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 ce930b20cb1edcd53e845ced857181d5 0 00:18:54.184 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:18:54.184 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:54.184 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=ce930b20cb1edcd53e845ced857181d5 00:18:54.184 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:18:54.184 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:18:54.184 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.ZCX 00:18:54.184 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.ZCX 00:18:54.184 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.ZCX 00:18:54.184 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:18:54.184 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:18:54.184 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:54.184 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:18:54.184 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:18:54.184 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:18:54.184 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:18:54.184 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=189a74f263806e23ada1ab50577009f9c5f09cfc88cae59a40198e882a11bcb1 00:18:54.184 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:18:54.184 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.t6f 00:18:54.185 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 189a74f263806e23ada1ab50577009f9c5f09cfc88cae59a40198e882a11bcb1 3 00:18:54.185 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 189a74f263806e23ada1ab50577009f9c5f09cfc88cae59a40198e882a11bcb1 3 00:18:54.185 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:18:54.185 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:54.185 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=189a74f263806e23ada1ab50577009f9c5f09cfc88cae59a40198e882a11bcb1 00:18:54.185 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:18:54.185 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:18:54.185 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.t6f 00:18:54.443 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.t6f 00:18:54.443 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.t6f 00:18:54.443 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:18:54.443 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 78461 00:18:54.443 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 78461 ']' 00:18:54.443 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:54.443 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:54.443 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:54.443 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:54.443 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:54.443 09:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:54.701 09:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:54.701 09:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:18:54.701 09:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:18:54.702 09:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.KfW 00:18:54.702 09:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.702 09:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:54.702 09:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.702 09:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.GGK ]] 00:18:54.702 09:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.GGK 00:18:54.702 09:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.702 09:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:54.702 09:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.702 09:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:18:54.702 09:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.bmg 00:18:54.702 09:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.702 09:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:54.702 09:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.702 09:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.zOG ]] 00:18:54.702 09:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.zOG 00:18:54.702 09:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.702 09:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:54.702 09:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.702 09:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:18:54.702 09:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.QQb 00:18:54.702 09:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.702 09:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:54.702 09:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.702 09:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.aZJ ]] 00:18:54.702 09:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.aZJ 00:18:54.702 09:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.702 09:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:54.702 09:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.702 09:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:18:54.702 09:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.oXe 00:18:54.702 09:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.702 09:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:54.702 09:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.702 09:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.ZCX ]] 00:18:54.702 09:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.ZCX 00:18:54.702 09:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.702 09:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:54.702 09:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.702 09:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:18:54.702 09:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.t6f 00:18:54.702 09:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.702 09:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:54.702 09:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.702 09:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:18:54.702 09:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:18:54.702 09:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:18:54.702 09:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:54.702 09:46:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:54.702 09:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:54.702 09:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:54.702 09:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:54.702 09:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:54.702 09:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:54.702 09:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:54.702 09:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:54.702 09:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:54.702 09:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:18:54.702 09:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:18:54.702 09:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:18:54.702 09:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:18:54.702 09:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:18:54.702 09:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:18:54.702 09:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:18:54.702 09:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:18:54.702 09:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:18:54.702 09:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:18:54.702 09:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:18:54.960 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:55.217 Waiting for block devices as requested 00:18:55.218 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:18:55.218 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:18:55.784 09:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:18:55.784 09:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:18:55.784 09:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:18:55.784 09:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:18:55.784 09:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:18:55.784 09:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:18:55.784 09:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:18:55.784 09:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:18:55.784 09:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:18:55.784 No valid GPT data, bailing 00:18:55.784 09:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:18:55.784 09:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:18:55.784 09:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:18:55.784 09:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:18:55.784 09:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:18:55.784 09:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:18:55.784 09:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:18:55.784 09:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:18:55.784 09:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:18:55.784 09:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:18:55.784 09:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:18:55.784 09:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:18:55.784 09:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:18:55.784 No valid GPT data, bailing 00:18:55.784 09:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:18:56.043 09:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:18:56.043 09:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@395 -- # return 1 00:18:56.043 09:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:18:56.043 09:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:18:56.043 09:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:18:56.043 09:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:18:56.043 09:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:18:56.043 09:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:18:56.043 09:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:18:56.043 09:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:18:56.043 09:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:18:56.043 09:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:18:56.043 No valid GPT data, bailing 00:18:56.043 09:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:18:56.043 09:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:18:56.043 09:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:18:56.043 09:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:18:56.043 09:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:18:56.043 09:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:18:56.043 09:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:18:56.043 09:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:18:56.043 09:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:18:56.044 09:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:18:56.044 09:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:18:56.044 09:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:18:56.044 09:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:18:56.044 No valid GPT data, bailing 00:18:56.044 09:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:18:56.044 09:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:18:56.044 09:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:18:56.044 09:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:18:56.044 09:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme1n1 ]] 00:18:56.044 09:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:18:56.044 09:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:18:56.044 09:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:18:56.044 09:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:18:56.044 09:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:18:56.044 09:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:18:56.044 09:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:18:56.044 09:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:18:56.044 09:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:18:56.044 09:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:18:56.044 09:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:18:56.044 09:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:18:56.044 09:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c --hostid=9203ba0c-8506-4f0b-a886-a7f874c4694c -a 10.0.0.1 -t tcp -s 4420 00:18:56.044 00:18:56.044 Discovery Log Number of Records 2, Generation counter 2 00:18:56.044 =====Discovery Log Entry 0====== 00:18:56.044 trtype: tcp 00:18:56.044 adrfam: ipv4 00:18:56.044 subtype: current discovery subsystem 00:18:56.044 treq: not specified, sq flow control disable supported 00:18:56.044 portid: 1 00:18:56.044 trsvcid: 4420 00:18:56.044 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:18:56.044 traddr: 10.0.0.1 00:18:56.044 eflags: none 00:18:56.044 sectype: none 00:18:56.044 =====Discovery Log Entry 1====== 00:18:56.044 trtype: tcp 00:18:56.044 adrfam: ipv4 00:18:56.044 subtype: nvme subsystem 00:18:56.044 treq: not specified, sq flow control disable supported 00:18:56.044 portid: 1 00:18:56.044 trsvcid: 4420 00:18:56.044 subnqn: nqn.2024-02.io.spdk:cnode0 00:18:56.044 traddr: 10.0.0.1 00:18:56.044 eflags: none 00:18:56.044 sectype: none 00:18:56.044 09:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:18:56.044 09:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:18:56.044 09:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:18:56.044 09:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:18:56.044 09:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:56.044 09:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:56.044 09:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:56.044 09:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:56.044 09:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDQ4NTVkZDA5Zjc4YzJkZmM3ODYzMGJkZThjMDIxN2FhMzBkYzIwNDNkOThlNjI1epBSfw==: 00:18:56.044 09:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:ZDllNmIxYzMzNjZlNDE2YzZmNGQ4NGEzMTA3OWI1NmI2NzUzNTRkZTkyYTVhZTNi7H0nyA==: 00:18:56.044 09:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:56.044 09:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:56.303 09:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDQ4NTVkZDA5Zjc4YzJkZmM3ODYzMGJkZThjMDIxN2FhMzBkYzIwNDNkOThlNjI1epBSfw==: 00:18:56.303 09:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDllNmIxYzMzNjZlNDE2YzZmNGQ4NGEzMTA3OWI1NmI2NzUzNTRkZTkyYTVhZTNi7H0nyA==: ]] 00:18:56.303 09:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDllNmIxYzMzNjZlNDE2YzZmNGQ4NGEzMTA3OWI1NmI2NzUzNTRkZTkyYTVhZTNi7H0nyA==: 00:18:56.303 09:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:18:56.303 09:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:18:56.303 09:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:18:56.303 09:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:56.303 09:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:18:56.303 09:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:56.303 09:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:18:56.303 09:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:56.303 09:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:56.303 09:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:56.303 09:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:56.303 09:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.303 09:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:56.303 09:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.303 09:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:56.303 09:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:56.303 09:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:56.303 09:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:56.304 09:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:56.304 09:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:56.304 09:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:56.304 09:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:56.304 09:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:56.304 09:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 
10.0.0.1 ]] 00:18:56.304 09:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:56.304 09:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:56.304 09:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.304 09:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:56.304 nvme0n1 00:18:56.304 09:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.304 09:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:56.304 09:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:56.304 09:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.304 09:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:56.304 09:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.304 09:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:56.304 09:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:56.304 09:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.304 09:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:56.563 09:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.563 09:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:18:56.563 09:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:56.563 09:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:56.563 09:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:18:56.563 09:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:56.563 09:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:56.563 09:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:56.563 09:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:56.563 09:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTE0N2E4MTNhNDg3YmU4MjQ5YTFmZDU2N2U4M2MyYjA/mFNG: 00:18:56.563 09:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTM2YTM5Y2FlZDFmODA1YTk0YTI2YjBmYWI5ZmU1MDM1NzQ1NzZjZTc4Y2JhNWI4OTUyZjY4YjY3YTQyZWQ5MgpSCvc=: 00:18:56.563 09:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:56.563 09:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:56.563 09:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTE0N2E4MTNhNDg3YmU4MjQ5YTFmZDU2N2U4M2MyYjA/mFNG: 00:18:56.563 09:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTM2YTM5Y2FlZDFmODA1YTk0YTI2YjBmYWI5ZmU1MDM1NzQ1NzZjZTc4Y2JhNWI4OTUyZjY4YjY3YTQyZWQ5MgpSCvc=: ]] 00:18:56.563 09:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:NTM2YTM5Y2FlZDFmODA1YTk0YTI2YjBmYWI5ZmU1MDM1NzQ1NzZjZTc4Y2JhNWI4OTUyZjY4YjY3YTQyZWQ5MgpSCvc=: 00:18:56.563 09:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:18:56.563 09:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:56.563 09:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:56.563 09:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:56.563 09:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:56.563 09:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:56.563 09:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:56.563 09:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.563 09:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:56.563 09:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.563 09:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:56.563 09:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:56.563 09:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:56.563 09:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:56.563 09:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:56.563 09:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:56.563 09:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:56.563 09:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:56.563 09:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:56.563 09:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:56.563 09:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:56.563 09:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:56.563 09:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.563 09:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:56.563 nvme0n1 00:18:56.563 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.563 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:56.563 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:56.563 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.563 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:56.563 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.563 
09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:56.563 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:56.563 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.563 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:56.563 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.563 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:56.563 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:18:56.563 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:56.563 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:56.563 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:56.563 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:56.563 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDQ4NTVkZDA5Zjc4YzJkZmM3ODYzMGJkZThjMDIxN2FhMzBkYzIwNDNkOThlNjI1epBSfw==: 00:18:56.563 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDllNmIxYzMzNjZlNDE2YzZmNGQ4NGEzMTA3OWI1NmI2NzUzNTRkZTkyYTVhZTNi7H0nyA==: 00:18:56.563 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:56.563 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:56.563 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDQ4NTVkZDA5Zjc4YzJkZmM3ODYzMGJkZThjMDIxN2FhMzBkYzIwNDNkOThlNjI1epBSfw==: 00:18:56.563 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDllNmIxYzMzNjZlNDE2YzZmNGQ4NGEzMTA3OWI1NmI2NzUzNTRkZTkyYTVhZTNi7H0nyA==: ]] 00:18:56.563 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDllNmIxYzMzNjZlNDE2YzZmNGQ4NGEzMTA3OWI1NmI2NzUzNTRkZTkyYTVhZTNi7H0nyA==: 00:18:56.563 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:18:56.563 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:56.563 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:56.563 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:56.563 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:56.563 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:56.563 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:56.563 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.563 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:56.563 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.564 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:56.564 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:56.564 09:46:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:56.564 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:56.564 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:56.564 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:56.564 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:56.564 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:56.564 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:56.564 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:56.564 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:56.564 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:56.564 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.564 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:56.822 nvme0n1 00:18:56.822 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.822 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:56.822 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.822 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:56.822 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:56.822 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.822 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:56.822 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:56.822 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.822 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:56.822 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.822 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:56.822 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:18:56.822 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:56.822 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:56.822 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:56.822 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:56.822 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjQ1YWViZTc3N2IyODIzNDkzMTMyNDIwZjQxODIyNzLy8g5n: 00:18:56.822 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjE0NmE1OTYwM2JhNDM5NWU1NTE0MmIwOWYyM2U1YWSkG+Xz: 00:18:56.822 09:46:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:56.822 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:56.822 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjQ1YWViZTc3N2IyODIzNDkzMTMyNDIwZjQxODIyNzLy8g5n: 00:18:56.822 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjE0NmE1OTYwM2JhNDM5NWU1NTE0MmIwOWYyM2U1YWSkG+Xz: ]] 00:18:56.822 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjE0NmE1OTYwM2JhNDM5NWU1NTE0MmIwOWYyM2U1YWSkG+Xz: 00:18:56.823 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:18:56.823 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:56.823 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:56.823 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:56.823 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:56.823 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:56.823 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:56.823 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.823 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:56.823 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.823 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:56.823 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:56.823 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:56.823 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:56.823 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:56.823 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:56.823 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:56.823 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:56.823 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:56.823 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:56.823 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:56.823 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:56.823 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.823 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:56.823 nvme0n1 00:18:56.823 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.823 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:56.823 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.823 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:56.823 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:56.823 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.082 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:57.082 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:57.082 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.082 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:57.082 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.082 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:57.082 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:18:57.082 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:57.082 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:57.082 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:57.082 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:57.082 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzY4ZTg4NTY1ZjNjYzI1ZWY1OTYxMzhhMmEwYzI2OGQyNWU1ZDNjMzMzMDhlMWU4CrVhZg==: 00:18:57.082 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2U5MzBiMjBjYjFlZGNkNTNlODQ1Y2VkODU3MTgxZDXGKAMl: 00:18:57.082 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:57.082 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:57.082 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzY4ZTg4NTY1ZjNjYzI1ZWY1OTYxMzhhMmEwYzI2OGQyNWU1ZDNjMzMzMDhlMWU4CrVhZg==: 00:18:57.082 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2U5MzBiMjBjYjFlZGNkNTNlODQ1Y2VkODU3MTgxZDXGKAMl: ]] 00:18:57.082 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2U5MzBiMjBjYjFlZGNkNTNlODQ1Y2VkODU3MTgxZDXGKAMl: 00:18:57.082 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:18:57.082 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:57.082 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:57.082 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:57.082 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:57.082 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:57.082 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:57.082 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.082 09:46:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:57.082 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.082 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:57.082 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:57.082 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:57.082 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:57.082 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:57.082 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:57.082 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:57.082 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:57.082 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:57.082 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:57.082 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:57.082 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:57.082 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.082 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:57.082 nvme0n1 00:18:57.082 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.082 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:57.082 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.082 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:57.082 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:57.082 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.082 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:57.082 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:57.082 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.082 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:57.082 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.082 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:57.082 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:18:57.082 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:57.082 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:57.082 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:57.082 
09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:57.082 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTg5YTc0ZjI2MzgwNmUyM2FkYTFhYjUwNTc3MDA5ZjljNWYwOWNmYzg4Y2FlNTlhNDAxOThlODgyYTExYmNiMTcrnxg=: 00:18:57.082 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:57.082 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:57.082 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:57.082 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTg5YTc0ZjI2MzgwNmUyM2FkYTFhYjUwNTc3MDA5ZjljNWYwOWNmYzg4Y2FlNTlhNDAxOThlODgyYTExYmNiMTcrnxg=: 00:18:57.082 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:57.082 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:18:57.082 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:57.082 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:57.082 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:57.082 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:57.082 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:57.082 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:57.082 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.082 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:57.082 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.083 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:57.083 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:57.083 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:57.083 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:57.083 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:57.083 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:57.083 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:57.083 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:57.083 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:57.083 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:57.083 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:57.083 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:57.083 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.083 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
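[Editor's note] The trace above (nvmf/common.sh and host/auth.sh) walks one full DH-HMAC-CHAP pass: export a local namespace through the kernel nvmet/TCP target, register the host NQN with a key pair, then connect from the SPDK side with matching --dhchap options. The following is a minimal sketch of that flow, not the test script itself: the xtrace does not show redirection targets, so the configfs attribute names are the standard nvmet ones and are an assumption here; rpc_cmd is assumed to wrap scripts/rpc.py; the DHHC-1 secrets are shown as placeholders for the values generated earlier in the test, and key1/ckey1 refer to key names the test registered before this point.

    subnqn=nqn.2024-02.io.spdk:cnode0
    hostnqn=nqn.2024-02.io.spdk:host0
    cfs=/sys/kernel/config/nvmet

    # 1. Kernel target: expose /dev/nvme1n1 as namespace 1 of $subnqn on 10.0.0.1:4420/TCP
    #    (nvmf/common.sh@686-705 in the trace).
    mkdir "$cfs/subsystems/$subnqn"
    mkdir "$cfs/subsystems/$subnqn/namespaces/1"
    mkdir "$cfs/ports/1"
    echo "SPDK-$subnqn" > "$cfs/subsystems/$subnqn/attr_model"          # assumed target
    echo 1              > "$cfs/subsystems/$subnqn/attr_allow_any_host" # assumed target
    echo /dev/nvme1n1   > "$cfs/subsystems/$subnqn/namespaces/1/device_path"
    echo 1              > "$cfs/subsystems/$subnqn/namespaces/1/enable"
    echo 10.0.0.1       > "$cfs/ports/1/addr_traddr"
    echo tcp            > "$cfs/ports/1/addr_trtype"
    echo 4420           > "$cfs/ports/1/addr_trsvcid"
    echo ipv4           > "$cfs/ports/1/addr_adrfam"
    ln -s "$cfs/subsystems/$subnqn" "$cfs/ports/1/subsystems/"
    nvme discover --hostnqn="$hostnqn" --hostid=9203ba0c-8506-4f0b-a886-a7f874c4694c \
        -a 10.0.0.1 -t tcp -s 4420    # should list the discovery subsystem and $subnqn

    # 2. Restrict the subsystem to $hostnqn and load its DH-HMAC-CHAP secrets
    #    (host/auth.sh@36-38 and nvmet_auth_set_key, @42-51; attribute names assumed).
    mkdir "$cfs/hosts/$hostnqn"
    echo 0 > "$cfs/subsystems/$subnqn/attr_allow_any_host"
    ln -s "$cfs/hosts/$hostnqn" "$cfs/subsystems/$subnqn/allowed_hosts/$hostnqn"
    echo 'hmac(sha256)' > "$cfs/hosts/$hostnqn/dhchap_hash"
    echo ffdhe2048      > "$cfs/hosts/$hostnqn/dhchap_dhgroup"
    echo 'DHHC-1:00:<host secret>'       > "$cfs/hosts/$hostnqn/dhchap_key"
    echo 'DHHC-1:02:<controller secret>' > "$cfs/hosts/$hostnqn/dhchap_ctrl_key"

    # 3. SPDK host side (connect_authenticate, host/auth.sh@55-65): advertise the
    #    digest/dhgroup, attach with the matching key pair, verify, then detach.
    scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q "$hostnqn" -n "$subnqn" --dhchap-key key1 --dhchap-ctrlr-key ckey1
    scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect "nvme0"
    scripts/rpc.py bdev_nvme_detach_controller nvme0

The remainder of the section repeats step 3 for every keyid (0-4) and then for the larger FFDHE groups (ffdhe3072 below, then ffdhe4096 and onward), which is why the same set_options/attach/get_controllers/detach pattern recurs in the log.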
00:18:57.410 nvme0n1 00:18:57.410 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.410 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:57.410 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.410 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:57.410 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:57.410 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.410 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:57.410 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:57.410 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.410 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:57.410 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.410 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:57.410 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:57.410 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:18:57.410 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:57.410 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:57.410 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:57.410 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:57.410 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTE0N2E4MTNhNDg3YmU4MjQ5YTFmZDU2N2U4M2MyYjA/mFNG: 00:18:57.410 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTM2YTM5Y2FlZDFmODA1YTk0YTI2YjBmYWI5ZmU1MDM1NzQ1NzZjZTc4Y2JhNWI4OTUyZjY4YjY3YTQyZWQ5MgpSCvc=: 00:18:57.410 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:57.410 09:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:57.679 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTE0N2E4MTNhNDg3YmU4MjQ5YTFmZDU2N2U4M2MyYjA/mFNG: 00:18:57.679 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTM2YTM5Y2FlZDFmODA1YTk0YTI2YjBmYWI5ZmU1MDM1NzQ1NzZjZTc4Y2JhNWI4OTUyZjY4YjY3YTQyZWQ5MgpSCvc=: ]] 00:18:57.679 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTM2YTM5Y2FlZDFmODA1YTk0YTI2YjBmYWI5ZmU1MDM1NzQ1NzZjZTc4Y2JhNWI4OTUyZjY4YjY3YTQyZWQ5MgpSCvc=: 00:18:57.679 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:18:57.679 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:57.679 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:57.679 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:57.679 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:57.679 09:46:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:57.679 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:57.679 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.679 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:57.679 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.679 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:57.679 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:57.679 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:57.679 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:57.679 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:57.679 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:57.679 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:57.679 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:57.679 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:57.679 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:57.679 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:57.679 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:57.679 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.679 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:57.679 nvme0n1 00:18:57.679 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.679 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:57.679 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.679 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:57.679 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:57.679 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.938 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:57.938 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:57.938 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.938 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:57.938 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.938 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:57.938 09:46:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:18:57.938 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:57.938 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:57.938 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:57.938 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:57.938 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDQ4NTVkZDA5Zjc4YzJkZmM3ODYzMGJkZThjMDIxN2FhMzBkYzIwNDNkOThlNjI1epBSfw==: 00:18:57.938 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDllNmIxYzMzNjZlNDE2YzZmNGQ4NGEzMTA3OWI1NmI2NzUzNTRkZTkyYTVhZTNi7H0nyA==: 00:18:57.938 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:57.938 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:57.938 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDQ4NTVkZDA5Zjc4YzJkZmM3ODYzMGJkZThjMDIxN2FhMzBkYzIwNDNkOThlNjI1epBSfw==: 00:18:57.938 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDllNmIxYzMzNjZlNDE2YzZmNGQ4NGEzMTA3OWI1NmI2NzUzNTRkZTkyYTVhZTNi7H0nyA==: ]] 00:18:57.938 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDllNmIxYzMzNjZlNDE2YzZmNGQ4NGEzMTA3OWI1NmI2NzUzNTRkZTkyYTVhZTNi7H0nyA==: 00:18:57.938 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:18:57.938 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:57.938 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:57.938 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:57.938 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:57.938 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:57.938 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:57.938 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.938 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:57.938 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.938 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:57.938 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:57.938 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:57.938 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:57.938 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:57.938 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:57.938 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:57.938 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:57.938 09:46:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:57.938 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:57.938 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:57.938 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:57.938 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.938 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:57.938 nvme0n1 00:18:57.938 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.938 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:57.938 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.938 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:57.938 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:57.938 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.938 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:57.938 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:57.938 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.938 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:57.938 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.938 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:57.938 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:18:57.938 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:57.938 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:57.938 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:57.938 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:57.938 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjQ1YWViZTc3N2IyODIzNDkzMTMyNDIwZjQxODIyNzLy8g5n: 00:18:57.938 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjE0NmE1OTYwM2JhNDM5NWU1NTE0MmIwOWYyM2U1YWSkG+Xz: 00:18:57.938 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:57.938 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:57.938 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjQ1YWViZTc3N2IyODIzNDkzMTMyNDIwZjQxODIyNzLy8g5n: 00:18:57.938 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjE0NmE1OTYwM2JhNDM5NWU1NTE0MmIwOWYyM2U1YWSkG+Xz: ]] 00:18:57.938 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjE0NmE1OTYwM2JhNDM5NWU1NTE0MmIwOWYyM2U1YWSkG+Xz: 00:18:57.938 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:18:57.938 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:57.938 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:57.938 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:57.938 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:57.938 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:57.938 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:57.938 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.938 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:57.938 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.938 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:57.938 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:57.938 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:57.938 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:57.938 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:57.938 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:57.938 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:57.939 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:57.939 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:57.939 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:57.939 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:57.939 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:57.939 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.939 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:58.198 nvme0n1 00:18:58.198 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.198 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:58.198 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.198 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:58.198 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:58.198 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.198 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:58.198 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:18:58.198 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.198 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:58.198 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.198 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:58.198 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:18:58.198 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:58.198 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:58.198 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:58.198 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:58.198 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzY4ZTg4NTY1ZjNjYzI1ZWY1OTYxMzhhMmEwYzI2OGQyNWU1ZDNjMzMzMDhlMWU4CrVhZg==: 00:18:58.198 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2U5MzBiMjBjYjFlZGNkNTNlODQ1Y2VkODU3MTgxZDXGKAMl: 00:18:58.198 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:58.198 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:58.198 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzY4ZTg4NTY1ZjNjYzI1ZWY1OTYxMzhhMmEwYzI2OGQyNWU1ZDNjMzMzMDhlMWU4CrVhZg==: 00:18:58.198 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2U5MzBiMjBjYjFlZGNkNTNlODQ1Y2VkODU3MTgxZDXGKAMl: ]] 00:18:58.198 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2U5MzBiMjBjYjFlZGNkNTNlODQ1Y2VkODU3MTgxZDXGKAMl: 00:18:58.198 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:18:58.198 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:58.198 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:58.198 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:58.198 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:58.198 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:58.198 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:58.198 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.198 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:58.198 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.198 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:58.198 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:58.198 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:58.198 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:58.198 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:58.198 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:58.198 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:58.198 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:58.198 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:58.198 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:58.198 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:58.198 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:58.198 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.198 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:58.458 nvme0n1 00:18:58.458 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.458 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:58.458 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.458 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:58.458 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:58.458 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.458 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:58.458 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:58.458 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.458 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:58.458 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.458 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:58.458 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:18:58.458 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:58.458 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:58.458 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:58.458 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:58.458 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTg5YTc0ZjI2MzgwNmUyM2FkYTFhYjUwNTc3MDA5ZjljNWYwOWNmYzg4Y2FlNTlhNDAxOThlODgyYTExYmNiMTcrnxg=: 00:18:58.458 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:58.458 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:58.458 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:58.458 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MTg5YTc0ZjI2MzgwNmUyM2FkYTFhYjUwNTc3MDA5ZjljNWYwOWNmYzg4Y2FlNTlhNDAxOThlODgyYTExYmNiMTcrnxg=: 00:18:58.458 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:58.458 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:18:58.458 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:58.458 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:58.458 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:58.458 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:58.459 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:58.459 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:58.459 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.459 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:58.459 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.459 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:58.459 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:58.459 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:58.459 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:58.459 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:58.459 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:58.459 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:58.459 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:58.459 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:58.459 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:58.459 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:58.459 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:58.459 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.459 09:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:58.718 nvme0n1 00:18:58.718 09:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.718 09:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:58.718 09:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:58.718 09:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.718 09:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:58.718 09:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.718 09:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:58.718 09:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:58.718 09:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.718 09:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:58.718 09:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.718 09:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:58.718 09:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:58.718 09:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:18:58.718 09:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:58.718 09:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:58.718 09:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:58.718 09:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:58.718 09:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTE0N2E4MTNhNDg3YmU4MjQ5YTFmZDU2N2U4M2MyYjA/mFNG: 00:18:58.718 09:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTM2YTM5Y2FlZDFmODA1YTk0YTI2YjBmYWI5ZmU1MDM1NzQ1NzZjZTc4Y2JhNWI4OTUyZjY4YjY3YTQyZWQ5MgpSCvc=: 00:18:58.718 09:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:58.718 09:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:59.284 09:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTE0N2E4MTNhNDg3YmU4MjQ5YTFmZDU2N2U4M2MyYjA/mFNG: 00:18:59.284 09:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTM2YTM5Y2FlZDFmODA1YTk0YTI2YjBmYWI5ZmU1MDM1NzQ1NzZjZTc4Y2JhNWI4OTUyZjY4YjY3YTQyZWQ5MgpSCvc=: ]] 00:18:59.284 09:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTM2YTM5Y2FlZDFmODA1YTk0YTI2YjBmYWI5ZmU1MDM1NzQ1NzZjZTc4Y2JhNWI4OTUyZjY4YjY3YTQyZWQ5MgpSCvc=: 00:18:59.284 09:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:18:59.284 09:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:59.284 09:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:59.284 09:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:59.284 09:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:59.284 09:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:59.284 09:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:59.284 09:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.284 09:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:59.284 09:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.284 09:46:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:59.284 09:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:59.284 09:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:59.284 09:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:59.284 09:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:59.284 09:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:59.284 09:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:59.284 09:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:59.284 09:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:59.284 09:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:59.284 09:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:59.284 09:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:59.284 09:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.284 09:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:59.543 nvme0n1 00:18:59.543 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.543 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:59.543 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.543 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:59.543 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:59.543 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.543 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:59.543 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:59.543 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.543 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:59.543 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.543 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:59.543 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:18:59.543 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:59.543 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:59.543 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:59.543 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:59.543 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NDQ4NTVkZDA5Zjc4YzJkZmM3ODYzMGJkZThjMDIxN2FhMzBkYzIwNDNkOThlNjI1epBSfw==: 00:18:59.543 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDllNmIxYzMzNjZlNDE2YzZmNGQ4NGEzMTA3OWI1NmI2NzUzNTRkZTkyYTVhZTNi7H0nyA==: 00:18:59.543 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:59.543 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:59.543 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDQ4NTVkZDA5Zjc4YzJkZmM3ODYzMGJkZThjMDIxN2FhMzBkYzIwNDNkOThlNjI1epBSfw==: 00:18:59.543 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDllNmIxYzMzNjZlNDE2YzZmNGQ4NGEzMTA3OWI1NmI2NzUzNTRkZTkyYTVhZTNi7H0nyA==: ]] 00:18:59.543 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDllNmIxYzMzNjZlNDE2YzZmNGQ4NGEzMTA3OWI1NmI2NzUzNTRkZTkyYTVhZTNi7H0nyA==: 00:18:59.543 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:18:59.543 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:59.543 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:59.543 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:59.543 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:59.543 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:59.543 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:59.543 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.543 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:59.543 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.543 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:59.543 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:59.543 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:59.543 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:59.543 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:59.543 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:59.543 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:59.543 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:59.543 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:59.543 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:59.543 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:59.543 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:59.543 09:46:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.543 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:59.802 nvme0n1 00:18:59.802 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.802 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:59.802 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.802 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:59.802 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:59.802 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.802 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:59.802 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:59.802 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.802 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:59.802 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.802 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:59.802 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:18:59.802 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:59.802 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:59.802 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:59.802 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:59.802 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjQ1YWViZTc3N2IyODIzNDkzMTMyNDIwZjQxODIyNzLy8g5n: 00:18:59.802 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjE0NmE1OTYwM2JhNDM5NWU1NTE0MmIwOWYyM2U1YWSkG+Xz: 00:18:59.802 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:59.802 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:59.802 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjQ1YWViZTc3N2IyODIzNDkzMTMyNDIwZjQxODIyNzLy8g5n: 00:18:59.802 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjE0NmE1OTYwM2JhNDM5NWU1NTE0MmIwOWYyM2U1YWSkG+Xz: ]] 00:18:59.802 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjE0NmE1OTYwM2JhNDM5NWU1NTE0MmIwOWYyM2U1YWSkG+Xz: 00:18:59.802 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:18:59.802 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:59.802 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:59.802 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:59.802 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:59.802 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:59.802 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:59.802 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.802 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:59.802 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.802 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:59.802 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:59.802 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:59.802 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:59.802 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:59.802 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:59.802 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:59.802 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:59.802 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:59.802 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:59.802 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:59.802 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:59.803 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.803 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:00.061 nvme0n1 00:19:00.061 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.061 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:00.061 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.061 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:00.061 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:00.061 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.061 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:00.061 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:00.061 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.061 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:00.061 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.061 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:00.061 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe4096 3 00:19:00.061 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:00.061 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:00.061 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:00.061 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:00.061 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzY4ZTg4NTY1ZjNjYzI1ZWY1OTYxMzhhMmEwYzI2OGQyNWU1ZDNjMzMzMDhlMWU4CrVhZg==: 00:19:00.061 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2U5MzBiMjBjYjFlZGNkNTNlODQ1Y2VkODU3MTgxZDXGKAMl: 00:19:00.061 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:00.061 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:00.061 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzY4ZTg4NTY1ZjNjYzI1ZWY1OTYxMzhhMmEwYzI2OGQyNWU1ZDNjMzMzMDhlMWU4CrVhZg==: 00:19:00.061 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2U5MzBiMjBjYjFlZGNkNTNlODQ1Y2VkODU3MTgxZDXGKAMl: ]] 00:19:00.061 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2U5MzBiMjBjYjFlZGNkNTNlODQ1Y2VkODU3MTgxZDXGKAMl: 00:19:00.061 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:19:00.061 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:00.061 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:00.061 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:00.061 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:00.061 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:00.061 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:00.061 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.061 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:00.061 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.061 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:00.061 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:00.061 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:00.061 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:00.061 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:00.061 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:00.061 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:00.061 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:00.061 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:00.061 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:00.061 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:00.061 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:00.061 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.061 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:00.320 nvme0n1 00:19:00.320 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.320 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:00.320 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:00.320 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.320 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:00.320 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.320 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:00.320 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:00.320 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.320 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:00.320 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.320 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:00.320 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:19:00.320 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:00.320 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:00.320 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:00.320 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:00.320 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTg5YTc0ZjI2MzgwNmUyM2FkYTFhYjUwNTc3MDA5ZjljNWYwOWNmYzg4Y2FlNTlhNDAxOThlODgyYTExYmNiMTcrnxg=: 00:19:00.320 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:00.320 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:00.320 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:00.320 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTg5YTc0ZjI2MzgwNmUyM2FkYTFhYjUwNTc3MDA5ZjljNWYwOWNmYzg4Y2FlNTlhNDAxOThlODgyYTExYmNiMTcrnxg=: 00:19:00.320 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:00.320 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:19:00.320 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:00.320 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:00.320 09:46:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:00.320 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:00.320 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:00.320 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:00.320 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.320 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:00.320 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.320 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:00.320 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:00.320 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:00.320 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:00.320 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:00.320 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:00.320 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:00.320 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:00.320 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:00.320 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:00.320 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:00.320 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:00.320 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.320 09:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:00.578 nvme0n1 00:19:00.578 09:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.578 09:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:00.578 09:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.578 09:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:00.578 09:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:00.578 09:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.578 09:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:00.578 09:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:00.578 09:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.578 09:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:00.578 09:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.578 09:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:00.578 09:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:00.578 09:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:19:00.578 09:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:00.578 09:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:00.578 09:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:00.578 09:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:00.578 09:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTE0N2E4MTNhNDg3YmU4MjQ5YTFmZDU2N2U4M2MyYjA/mFNG: 00:19:00.578 09:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTM2YTM5Y2FlZDFmODA1YTk0YTI2YjBmYWI5ZmU1MDM1NzQ1NzZjZTc4Y2JhNWI4OTUyZjY4YjY3YTQyZWQ5MgpSCvc=: 00:19:00.578 09:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:00.578 09:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:02.525 09:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTE0N2E4MTNhNDg3YmU4MjQ5YTFmZDU2N2U4M2MyYjA/mFNG: 00:19:02.525 09:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTM2YTM5Y2FlZDFmODA1YTk0YTI2YjBmYWI5ZmU1MDM1NzQ1NzZjZTc4Y2JhNWI4OTUyZjY4YjY3YTQyZWQ5MgpSCvc=: ]] 00:19:02.525 09:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTM2YTM5Y2FlZDFmODA1YTk0YTI2YjBmYWI5ZmU1MDM1NzQ1NzZjZTc4Y2JhNWI4OTUyZjY4YjY3YTQyZWQ5MgpSCvc=: 00:19:02.525 09:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:19:02.525 09:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:02.525 09:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:02.525 09:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:02.525 09:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:02.525 09:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:02.525 09:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:02.525 09:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.525 09:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:02.525 09:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.525 09:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:02.525 09:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:02.525 09:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:02.525 09:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:02.525 09:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:02.525 09:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:02.525 09:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:02.525 09:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:02.525 09:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:02.525 09:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:02.525 09:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:02.525 09:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:02.525 09:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.525 09:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:02.783 nvme0n1 00:19:02.783 09:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.783 09:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:02.783 09:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.783 09:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:02.783 09:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:02.783 09:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.783 09:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:02.783 09:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:02.783 09:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.783 09:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:02.783 09:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.783 09:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:02.783 09:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:19:02.783 09:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:02.783 09:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:02.783 09:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:02.783 09:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:02.783 09:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDQ4NTVkZDA5Zjc4YzJkZmM3ODYzMGJkZThjMDIxN2FhMzBkYzIwNDNkOThlNjI1epBSfw==: 00:19:02.783 09:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDllNmIxYzMzNjZlNDE2YzZmNGQ4NGEzMTA3OWI1NmI2NzUzNTRkZTkyYTVhZTNi7H0nyA==: 00:19:02.783 09:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:02.783 09:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:02.783 09:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NDQ4NTVkZDA5Zjc4YzJkZmM3ODYzMGJkZThjMDIxN2FhMzBkYzIwNDNkOThlNjI1epBSfw==: 00:19:02.783 09:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDllNmIxYzMzNjZlNDE2YzZmNGQ4NGEzMTA3OWI1NmI2NzUzNTRkZTkyYTVhZTNi7H0nyA==: ]] 00:19:02.783 09:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDllNmIxYzMzNjZlNDE2YzZmNGQ4NGEzMTA3OWI1NmI2NzUzNTRkZTkyYTVhZTNi7H0nyA==: 00:19:02.783 09:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:19:02.783 09:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:02.783 09:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:02.783 09:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:02.783 09:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:02.783 09:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:02.783 09:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:02.783 09:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.783 09:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:02.783 09:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.783 09:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:02.783 09:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:02.783 09:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:02.783 09:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:02.783 09:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:02.783 09:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:02.783 09:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:02.783 09:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:02.783 09:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:02.783 09:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:02.783 09:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:02.783 09:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:02.783 09:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.783 09:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:03.349 nvme0n1 00:19:03.349 09:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.349 09:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:03.349 09:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.349 09:46:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:03.349 09:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:03.349 09:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.349 09:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:03.349 09:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:03.349 09:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.349 09:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:03.349 09:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.349 09:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:03.349 09:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:19:03.349 09:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:03.349 09:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:03.349 09:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:03.349 09:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:03.349 09:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjQ1YWViZTc3N2IyODIzNDkzMTMyNDIwZjQxODIyNzLy8g5n: 00:19:03.349 09:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjE0NmE1OTYwM2JhNDM5NWU1NTE0MmIwOWYyM2U1YWSkG+Xz: 00:19:03.349 09:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:03.349 09:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:03.349 09:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjQ1YWViZTc3N2IyODIzNDkzMTMyNDIwZjQxODIyNzLy8g5n: 00:19:03.349 09:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjE0NmE1OTYwM2JhNDM5NWU1NTE0MmIwOWYyM2U1YWSkG+Xz: ]] 00:19:03.349 09:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjE0NmE1OTYwM2JhNDM5NWU1NTE0MmIwOWYyM2U1YWSkG+Xz: 00:19:03.349 09:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:19:03.349 09:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:03.349 09:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:03.349 09:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:03.349 09:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:03.349 09:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:03.349 09:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:03.349 09:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.349 09:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:03.349 09:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.349 09:46:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:03.349 09:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:03.349 09:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:03.349 09:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:03.349 09:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:03.349 09:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:03.349 09:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:03.349 09:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:03.349 09:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:03.349 09:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:03.349 09:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:03.349 09:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:03.349 09:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.349 09:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:03.607 nvme0n1 00:19:03.607 09:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.607 09:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:03.607 09:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:03.607 09:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.607 09:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:03.607 09:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.607 09:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:03.607 09:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:03.607 09:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.607 09:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:03.607 09:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.607 09:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:03.607 09:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:19:03.607 09:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:03.607 09:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:03.607 09:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:03.607 09:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:03.607 09:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:YzY4ZTg4NTY1ZjNjYzI1ZWY1OTYxMzhhMmEwYzI2OGQyNWU1ZDNjMzMzMDhlMWU4CrVhZg==: 00:19:03.607 09:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2U5MzBiMjBjYjFlZGNkNTNlODQ1Y2VkODU3MTgxZDXGKAMl: 00:19:03.607 09:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:03.607 09:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:03.607 09:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzY4ZTg4NTY1ZjNjYzI1ZWY1OTYxMzhhMmEwYzI2OGQyNWU1ZDNjMzMzMDhlMWU4CrVhZg==: 00:19:03.607 09:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2U5MzBiMjBjYjFlZGNkNTNlODQ1Y2VkODU3MTgxZDXGKAMl: ]] 00:19:03.607 09:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2U5MzBiMjBjYjFlZGNkNTNlODQ1Y2VkODU3MTgxZDXGKAMl: 00:19:03.607 09:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:19:03.607 09:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:03.607 09:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:03.607 09:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:03.607 09:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:03.607 09:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:03.607 09:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:03.607 09:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.607 09:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:03.607 09:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.607 09:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:03.607 09:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:03.607 09:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:03.607 09:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:03.607 09:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:03.607 09:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:03.607 09:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:03.607 09:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:03.607 09:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:03.607 09:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:03.607 09:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:03.607 09:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:03.607 09:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.607 
09:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:04.174 nvme0n1 00:19:04.174 09:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.174 09:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:04.174 09:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:04.174 09:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.174 09:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:04.174 09:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.174 09:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:04.174 09:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:04.174 09:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.174 09:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:04.174 09:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.174 09:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:04.174 09:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:19:04.174 09:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:04.174 09:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:04.174 09:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:04.174 09:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:04.174 09:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTg5YTc0ZjI2MzgwNmUyM2FkYTFhYjUwNTc3MDA5ZjljNWYwOWNmYzg4Y2FlNTlhNDAxOThlODgyYTExYmNiMTcrnxg=: 00:19:04.174 09:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:04.174 09:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:04.174 09:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:04.174 09:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTg5YTc0ZjI2MzgwNmUyM2FkYTFhYjUwNTc3MDA5ZjljNWYwOWNmYzg4Y2FlNTlhNDAxOThlODgyYTExYmNiMTcrnxg=: 00:19:04.174 09:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:04.174 09:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:19:04.174 09:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:04.174 09:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:04.174 09:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:04.174 09:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:04.174 09:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:04.174 09:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:04.174 09:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.174 09:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:04.174 09:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.174 09:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:04.174 09:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:04.174 09:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:04.174 09:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:04.174 09:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:04.174 09:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:04.174 09:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:04.174 09:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:04.174 09:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:04.174 09:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:04.174 09:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:04.174 09:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:04.174 09:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.174 09:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:04.432 nvme0n1 00:19:04.432 09:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.432 09:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:04.432 09:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:04.432 09:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.432 09:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:04.432 09:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.432 09:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:04.432 09:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:04.432 09:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.432 09:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:04.432 09:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.432 09:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:04.432 09:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:04.433 09:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:19:04.433 09:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:04.433 09:46:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:04.433 09:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:04.433 09:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:04.433 09:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTE0N2E4MTNhNDg3YmU4MjQ5YTFmZDU2N2U4M2MyYjA/mFNG: 00:19:04.433 09:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTM2YTM5Y2FlZDFmODA1YTk0YTI2YjBmYWI5ZmU1MDM1NzQ1NzZjZTc4Y2JhNWI4OTUyZjY4YjY3YTQyZWQ5MgpSCvc=: 00:19:04.433 09:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:04.433 09:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:04.433 09:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTE0N2E4MTNhNDg3YmU4MjQ5YTFmZDU2N2U4M2MyYjA/mFNG: 00:19:04.433 09:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTM2YTM5Y2FlZDFmODA1YTk0YTI2YjBmYWI5ZmU1MDM1NzQ1NzZjZTc4Y2JhNWI4OTUyZjY4YjY3YTQyZWQ5MgpSCvc=: ]] 00:19:04.433 09:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTM2YTM5Y2FlZDFmODA1YTk0YTI2YjBmYWI5ZmU1MDM1NzQ1NzZjZTc4Y2JhNWI4OTUyZjY4YjY3YTQyZWQ5MgpSCvc=: 00:19:04.433 09:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:19:04.433 09:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:04.433 09:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:04.433 09:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:04.433 09:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:04.433 09:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:04.433 09:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:04.433 09:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.433 09:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:04.433 09:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.433 09:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:04.433 09:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:04.690 09:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:04.690 09:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:04.690 09:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:04.690 09:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:04.690 09:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:04.691 09:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:04.691 09:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:04.691 09:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:04.691 09:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:04.691 09:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:04.691 09:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.691 09:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:05.258 nvme0n1 00:19:05.258 09:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.258 09:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:05.258 09:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.258 09:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:05.258 09:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:05.258 09:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.258 09:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:05.258 09:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:05.258 09:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.258 09:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:05.258 09:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.258 09:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:05.258 09:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:19:05.258 09:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:05.258 09:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:05.258 09:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:05.258 09:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:05.258 09:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDQ4NTVkZDA5Zjc4YzJkZmM3ODYzMGJkZThjMDIxN2FhMzBkYzIwNDNkOThlNjI1epBSfw==: 00:19:05.258 09:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDllNmIxYzMzNjZlNDE2YzZmNGQ4NGEzMTA3OWI1NmI2NzUzNTRkZTkyYTVhZTNi7H0nyA==: 00:19:05.258 09:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:05.258 09:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:05.258 09:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDQ4NTVkZDA5Zjc4YzJkZmM3ODYzMGJkZThjMDIxN2FhMzBkYzIwNDNkOThlNjI1epBSfw==: 00:19:05.258 09:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDllNmIxYzMzNjZlNDE2YzZmNGQ4NGEzMTA3OWI1NmI2NzUzNTRkZTkyYTVhZTNi7H0nyA==: ]] 00:19:05.258 09:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDllNmIxYzMzNjZlNDE2YzZmNGQ4NGEzMTA3OWI1NmI2NzUzNTRkZTkyYTVhZTNi7H0nyA==: 00:19:05.258 09:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:19:05.258 09:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:05.258 09:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:05.258 09:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:05.258 09:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:05.258 09:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:05.258 09:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:05.258 09:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.258 09:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:05.258 09:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.258 09:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:05.258 09:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:05.258 09:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:05.258 09:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:05.258 09:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:05.258 09:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:05.258 09:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:05.258 09:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:05.258 09:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:05.258 09:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:05.258 09:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:05.258 09:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:05.258 09:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.258 09:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:05.824 nvme0n1 00:19:05.824 09:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.824 09:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:05.824 09:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.824 09:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:05.824 09:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:05.824 09:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.824 09:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:05.824 09:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:05.824 09:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:19:05.824 09:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:06.083 09:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.083 09:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:06.083 09:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:19:06.083 09:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:06.083 09:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:06.083 09:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:06.083 09:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:06.083 09:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjQ1YWViZTc3N2IyODIzNDkzMTMyNDIwZjQxODIyNzLy8g5n: 00:19:06.083 09:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjE0NmE1OTYwM2JhNDM5NWU1NTE0MmIwOWYyM2U1YWSkG+Xz: 00:19:06.083 09:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:06.083 09:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:06.083 09:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjQ1YWViZTc3N2IyODIzNDkzMTMyNDIwZjQxODIyNzLy8g5n: 00:19:06.083 09:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjE0NmE1OTYwM2JhNDM5NWU1NTE0MmIwOWYyM2U1YWSkG+Xz: ]] 00:19:06.083 09:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjE0NmE1OTYwM2JhNDM5NWU1NTE0MmIwOWYyM2U1YWSkG+Xz: 00:19:06.083 09:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:19:06.083 09:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:06.083 09:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:06.083 09:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:06.083 09:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:06.083 09:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:06.083 09:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:06.083 09:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.083 09:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:06.083 09:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.083 09:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:06.083 09:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:06.083 09:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:06.083 09:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:06.083 09:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:06.083 09:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:06.083 
09:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:06.083 09:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:06.083 09:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:06.083 09:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:06.083 09:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:06.083 09:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:06.083 09:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.083 09:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:06.650 nvme0n1 00:19:06.650 09:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.650 09:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:06.650 09:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:06.650 09:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.650 09:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:06.650 09:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.650 09:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:06.650 09:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:06.650 09:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.650 09:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:06.650 09:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.650 09:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:06.650 09:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:19:06.650 09:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:06.650 09:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:06.650 09:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:06.650 09:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:06.650 09:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzY4ZTg4NTY1ZjNjYzI1ZWY1OTYxMzhhMmEwYzI2OGQyNWU1ZDNjMzMzMDhlMWU4CrVhZg==: 00:19:06.650 09:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2U5MzBiMjBjYjFlZGNkNTNlODQ1Y2VkODU3MTgxZDXGKAMl: 00:19:06.650 09:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:06.650 09:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:06.650 09:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzY4ZTg4NTY1ZjNjYzI1ZWY1OTYxMzhhMmEwYzI2OGQyNWU1ZDNjMzMzMDhlMWU4CrVhZg==: 00:19:06.650 09:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:Y2U5MzBiMjBjYjFlZGNkNTNlODQ1Y2VkODU3MTgxZDXGKAMl: ]] 00:19:06.650 09:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2U5MzBiMjBjYjFlZGNkNTNlODQ1Y2VkODU3MTgxZDXGKAMl: 00:19:06.650 09:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:19:06.650 09:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:06.650 09:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:06.650 09:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:06.650 09:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:06.650 09:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:06.650 09:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:06.650 09:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.650 09:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:06.650 09:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.650 09:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:06.650 09:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:06.650 09:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:06.650 09:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:06.650 09:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:06.650 09:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:06.650 09:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:06.650 09:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:06.650 09:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:06.650 09:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:06.650 09:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:06.650 09:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:06.650 09:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.650 09:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:07.218 nvme0n1 00:19:07.218 09:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.218 09:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:07.218 09:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.218 09:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:07.218 09:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:07.218 09:46:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.218 09:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:07.218 09:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:07.218 09:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.218 09:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:07.218 09:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.218 09:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:07.218 09:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:19:07.218 09:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:07.218 09:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:07.218 09:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:07.218 09:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:07.218 09:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTg5YTc0ZjI2MzgwNmUyM2FkYTFhYjUwNTc3MDA5ZjljNWYwOWNmYzg4Y2FlNTlhNDAxOThlODgyYTExYmNiMTcrnxg=: 00:19:07.218 09:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:07.218 09:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:07.218 09:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:07.218 09:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTg5YTc0ZjI2MzgwNmUyM2FkYTFhYjUwNTc3MDA5ZjljNWYwOWNmYzg4Y2FlNTlhNDAxOThlODgyYTExYmNiMTcrnxg=: 00:19:07.218 09:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:07.218 09:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:19:07.218 09:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:07.218 09:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:07.218 09:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:07.218 09:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:07.218 09:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:07.218 09:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:07.218 09:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.218 09:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:07.218 09:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.218 09:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:07.218 09:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:07.218 09:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:07.218 09:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:07.218 09:46:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:07.218 09:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:07.218 09:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:07.218 09:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:07.218 09:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:07.218 09:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:07.218 09:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:07.218 09:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:07.218 09:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.218 09:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:08.155 nvme0n1 00:19:08.155 09:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.155 09:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:08.155 09:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:08.155 09:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.155 09:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:08.155 09:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.155 09:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:08.155 09:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:08.155 09:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.155 09:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:08.155 09:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.155 09:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:19:08.155 09:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:08.155 09:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:08.155 09:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:19:08.155 09:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:08.155 09:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:08.155 09:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:08.155 09:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:08.155 09:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTE0N2E4MTNhNDg3YmU4MjQ5YTFmZDU2N2U4M2MyYjA/mFNG: 00:19:08.155 09:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:NTM2YTM5Y2FlZDFmODA1YTk0YTI2YjBmYWI5ZmU1MDM1NzQ1NzZjZTc4Y2JhNWI4OTUyZjY4YjY3YTQyZWQ5MgpSCvc=: 00:19:08.155 09:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:08.155 09:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:08.155 09:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTE0N2E4MTNhNDg3YmU4MjQ5YTFmZDU2N2U4M2MyYjA/mFNG: 00:19:08.155 09:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTM2YTM5Y2FlZDFmODA1YTk0YTI2YjBmYWI5ZmU1MDM1NzQ1NzZjZTc4Y2JhNWI4OTUyZjY4YjY3YTQyZWQ5MgpSCvc=: ]] 00:19:08.155 09:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTM2YTM5Y2FlZDFmODA1YTk0YTI2YjBmYWI5ZmU1MDM1NzQ1NzZjZTc4Y2JhNWI4OTUyZjY4YjY3YTQyZWQ5MgpSCvc=: 00:19:08.155 09:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:19:08.155 09:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:08.155 09:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:08.155 09:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:08.155 09:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:08.155 09:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:08.155 09:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:08.155 09:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.155 09:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:08.155 09:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.155 09:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:08.155 09:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:08.155 09:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:08.155 09:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:08.155 09:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:08.156 09:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:08.156 09:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:08.156 09:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:08.156 09:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:08.156 09:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:08.156 09:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:08.156 09:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:08.156 09:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.156 09:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:19:08.156 nvme0n1 00:19:08.156 09:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.156 09:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:08.156 09:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:08.156 09:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.156 09:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:08.156 09:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.156 09:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:08.156 09:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:08.156 09:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.156 09:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:08.156 09:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.156 09:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:08.156 09:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:19:08.156 09:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:08.156 09:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:08.156 09:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:08.156 09:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:08.156 09:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDQ4NTVkZDA5Zjc4YzJkZmM3ODYzMGJkZThjMDIxN2FhMzBkYzIwNDNkOThlNjI1epBSfw==: 00:19:08.156 09:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDllNmIxYzMzNjZlNDE2YzZmNGQ4NGEzMTA3OWI1NmI2NzUzNTRkZTkyYTVhZTNi7H0nyA==: 00:19:08.156 09:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:08.156 09:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:08.156 09:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDQ4NTVkZDA5Zjc4YzJkZmM3ODYzMGJkZThjMDIxN2FhMzBkYzIwNDNkOThlNjI1epBSfw==: 00:19:08.156 09:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDllNmIxYzMzNjZlNDE2YzZmNGQ4NGEzMTA3OWI1NmI2NzUzNTRkZTkyYTVhZTNi7H0nyA==: ]] 00:19:08.156 09:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDllNmIxYzMzNjZlNDE2YzZmNGQ4NGEzMTA3OWI1NmI2NzUzNTRkZTkyYTVhZTNi7H0nyA==: 00:19:08.156 09:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:19:08.156 09:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:08.156 09:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:08.156 09:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:08.156 09:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:08.156 09:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:19:08.156 09:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:08.156 09:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.156 09:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:08.156 09:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.156 09:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:08.156 09:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:08.156 09:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:08.156 09:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:08.156 09:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:08.156 09:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:08.156 09:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:08.156 09:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:08.156 09:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:08.156 09:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:08.156 09:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:08.156 09:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:08.156 09:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.156 09:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:08.415 nvme0n1 00:19:08.415 09:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.415 09:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:08.415 09:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.415 09:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:08.415 09:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:08.415 09:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.415 09:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:08.415 09:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:08.415 09:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.415 09:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:08.415 09:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.415 09:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:08.415 09:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:19:08.415 
09:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:08.415 09:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:08.415 09:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:08.415 09:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:08.415 09:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjQ1YWViZTc3N2IyODIzNDkzMTMyNDIwZjQxODIyNzLy8g5n: 00:19:08.415 09:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjE0NmE1OTYwM2JhNDM5NWU1NTE0MmIwOWYyM2U1YWSkG+Xz: 00:19:08.415 09:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:08.415 09:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:08.415 09:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjQ1YWViZTc3N2IyODIzNDkzMTMyNDIwZjQxODIyNzLy8g5n: 00:19:08.415 09:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjE0NmE1OTYwM2JhNDM5NWU1NTE0MmIwOWYyM2U1YWSkG+Xz: ]] 00:19:08.415 09:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjE0NmE1OTYwM2JhNDM5NWU1NTE0MmIwOWYyM2U1YWSkG+Xz: 00:19:08.415 09:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:19:08.415 09:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:08.415 09:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:08.415 09:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:08.415 09:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:08.415 09:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:08.416 09:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:08.416 09:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.416 09:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:08.416 09:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.416 09:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:08.416 09:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:08.416 09:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:08.416 09:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:08.416 09:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:08.416 09:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:08.416 09:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:08.416 09:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:08.416 09:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:08.416 09:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:08.416 09:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:08.416 09:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:08.416 09:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.416 09:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:08.416 nvme0n1 00:19:08.416 09:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.416 09:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:08.416 09:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.416 09:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:08.416 09:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:08.416 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.675 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:08.675 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:08.675 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.675 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:08.675 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.675 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:08.675 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:19:08.675 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:08.675 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:08.675 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:08.675 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:08.675 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzY4ZTg4NTY1ZjNjYzI1ZWY1OTYxMzhhMmEwYzI2OGQyNWU1ZDNjMzMzMDhlMWU4CrVhZg==: 00:19:08.675 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2U5MzBiMjBjYjFlZGNkNTNlODQ1Y2VkODU3MTgxZDXGKAMl: 00:19:08.675 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:08.675 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:08.675 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzY4ZTg4NTY1ZjNjYzI1ZWY1OTYxMzhhMmEwYzI2OGQyNWU1ZDNjMzMzMDhlMWU4CrVhZg==: 00:19:08.675 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2U5MzBiMjBjYjFlZGNkNTNlODQ1Y2VkODU3MTgxZDXGKAMl: ]] 00:19:08.675 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2U5MzBiMjBjYjFlZGNkNTNlODQ1Y2VkODU3MTgxZDXGKAMl: 00:19:08.675 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:19:08.675 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:08.675 
09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:08.675 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:08.675 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:08.675 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:08.675 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:08.675 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.675 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:08.675 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.675 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:08.675 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:08.675 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:08.675 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:08.675 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:08.675 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:08.675 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:08.675 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:08.675 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:08.675 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:08.675 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:08.675 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:08.675 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.675 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:08.675 nvme0n1 00:19:08.675 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.675 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:08.675 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.675 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:08.675 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:08.675 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.675 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:08.675 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:08.675 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.675 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:19:08.675 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.675 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:08.675 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:19:08.675 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:08.675 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:08.675 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:08.676 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:08.676 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTg5YTc0ZjI2MzgwNmUyM2FkYTFhYjUwNTc3MDA5ZjljNWYwOWNmYzg4Y2FlNTlhNDAxOThlODgyYTExYmNiMTcrnxg=: 00:19:08.676 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:08.676 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:08.676 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:08.676 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTg5YTc0ZjI2MzgwNmUyM2FkYTFhYjUwNTc3MDA5ZjljNWYwOWNmYzg4Y2FlNTlhNDAxOThlODgyYTExYmNiMTcrnxg=: 00:19:08.676 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:08.676 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:19:08.676 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:08.676 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:08.676 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:08.676 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:08.676 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:08.676 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:08.676 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.676 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:08.676 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.676 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:08.676 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:08.676 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:08.676 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:08.676 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:08.676 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:08.676 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:08.676 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:08.676 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:08.676 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:08.676 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:08.676 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:08.676 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.676 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:08.935 nvme0n1 00:19:08.935 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.935 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:08.935 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.935 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:08.935 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:08.935 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.935 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:08.936 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:08.936 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.936 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:08.936 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.936 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:08.936 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:08.936 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:19:08.936 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:08.936 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:08.936 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:08.936 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:08.936 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTE0N2E4MTNhNDg3YmU4MjQ5YTFmZDU2N2U4M2MyYjA/mFNG: 00:19:08.936 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTM2YTM5Y2FlZDFmODA1YTk0YTI2YjBmYWI5ZmU1MDM1NzQ1NzZjZTc4Y2JhNWI4OTUyZjY4YjY3YTQyZWQ5MgpSCvc=: 00:19:08.936 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:08.936 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:08.936 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTE0N2E4MTNhNDg3YmU4MjQ5YTFmZDU2N2U4M2MyYjA/mFNG: 00:19:08.936 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTM2YTM5Y2FlZDFmODA1YTk0YTI2YjBmYWI5ZmU1MDM1NzQ1NzZjZTc4Y2JhNWI4OTUyZjY4YjY3YTQyZWQ5MgpSCvc=: ]] 00:19:08.936 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:NTM2YTM5Y2FlZDFmODA1YTk0YTI2YjBmYWI5ZmU1MDM1NzQ1NzZjZTc4Y2JhNWI4OTUyZjY4YjY3YTQyZWQ5MgpSCvc=: 00:19:08.936 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:19:08.936 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:08.936 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:08.936 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:08.936 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:08.936 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:08.936 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:08.936 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.936 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:08.936 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.936 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:08.936 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:08.936 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:08.936 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:08.936 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:08.936 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:08.936 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:08.936 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:08.936 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:08.936 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:08.936 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:08.936 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:08.936 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.936 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:09.195 nvme0n1 00:19:09.195 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.195 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:09.195 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.195 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:09.195 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:09.195 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.195 
09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:09.195 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:09.195 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.195 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:09.195 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.195 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:09.195 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:19:09.195 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:09.195 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:09.195 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:09.195 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:09.195 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDQ4NTVkZDA5Zjc4YzJkZmM3ODYzMGJkZThjMDIxN2FhMzBkYzIwNDNkOThlNjI1epBSfw==: 00:19:09.195 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDllNmIxYzMzNjZlNDE2YzZmNGQ4NGEzMTA3OWI1NmI2NzUzNTRkZTkyYTVhZTNi7H0nyA==: 00:19:09.195 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:09.195 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:09.195 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDQ4NTVkZDA5Zjc4YzJkZmM3ODYzMGJkZThjMDIxN2FhMzBkYzIwNDNkOThlNjI1epBSfw==: 00:19:09.195 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDllNmIxYzMzNjZlNDE2YzZmNGQ4NGEzMTA3OWI1NmI2NzUzNTRkZTkyYTVhZTNi7H0nyA==: ]] 00:19:09.195 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDllNmIxYzMzNjZlNDE2YzZmNGQ4NGEzMTA3OWI1NmI2NzUzNTRkZTkyYTVhZTNi7H0nyA==: 00:19:09.195 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:19:09.195 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:09.195 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:09.195 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:09.195 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:09.195 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:09.195 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:09.195 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.195 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:09.195 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.195 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:09.195 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:09.195 09:46:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:09.195 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:09.195 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:09.195 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:09.195 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:09.195 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:09.195 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:09.195 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:09.195 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:09.195 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:09.195 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.195 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:09.195 nvme0n1 00:19:09.195 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.195 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:09.195 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.195 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:09.195 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:09.195 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.454 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:09.454 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:09.454 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.454 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:09.454 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.454 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:09.454 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:19:09.454 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:09.454 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:09.454 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:09.454 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:09.454 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjQ1YWViZTc3N2IyODIzNDkzMTMyNDIwZjQxODIyNzLy8g5n: 00:19:09.454 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjE0NmE1OTYwM2JhNDM5NWU1NTE0MmIwOWYyM2U1YWSkG+Xz: 00:19:09.454 09:46:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:09.454 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:09.454 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjQ1YWViZTc3N2IyODIzNDkzMTMyNDIwZjQxODIyNzLy8g5n: 00:19:09.454 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjE0NmE1OTYwM2JhNDM5NWU1NTE0MmIwOWYyM2U1YWSkG+Xz: ]] 00:19:09.454 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjE0NmE1OTYwM2JhNDM5NWU1NTE0MmIwOWYyM2U1YWSkG+Xz: 00:19:09.454 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:19:09.454 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:09.454 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:09.454 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:09.454 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:09.454 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:09.454 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:09.454 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.454 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:09.454 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.454 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:09.454 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:09.454 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:09.454 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:09.454 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:09.454 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:09.454 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:09.454 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:09.454 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:09.454 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:09.454 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:09.454 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:09.454 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.454 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:09.454 nvme0n1 00:19:09.454 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.454 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:09.454 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:09.454 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.454 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:09.454 09:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.454 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:09.454 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:09.454 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.454 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:09.454 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.454 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:09.454 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:19:09.454 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:09.454 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:09.454 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:09.454 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:09.454 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzY4ZTg4NTY1ZjNjYzI1ZWY1OTYxMzhhMmEwYzI2OGQyNWU1ZDNjMzMzMDhlMWU4CrVhZg==: 00:19:09.454 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2U5MzBiMjBjYjFlZGNkNTNlODQ1Y2VkODU3MTgxZDXGKAMl: 00:19:09.454 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:09.454 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:09.454 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzY4ZTg4NTY1ZjNjYzI1ZWY1OTYxMzhhMmEwYzI2OGQyNWU1ZDNjMzMzMDhlMWU4CrVhZg==: 00:19:09.454 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2U5MzBiMjBjYjFlZGNkNTNlODQ1Y2VkODU3MTgxZDXGKAMl: ]] 00:19:09.454 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2U5MzBiMjBjYjFlZGNkNTNlODQ1Y2VkODU3MTgxZDXGKAMl: 00:19:09.454 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:19:09.454 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:09.454 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:09.454 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:09.454 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:09.454 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:09.454 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:09.454 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.454 09:46:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:09.454 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.454 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:09.454 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:09.454 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:09.454 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:09.454 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:09.454 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:09.454 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:09.454 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:09.454 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:09.454 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:09.454 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:09.454 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:09.454 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.454 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:09.712 nvme0n1 00:19:09.712 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.712 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:09.712 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.712 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:09.712 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:09.712 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.712 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:09.712 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:09.713 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.713 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:09.713 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.713 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:09.713 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:19:09.713 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:09.713 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:09.713 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:09.713 
09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:09.713 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTg5YTc0ZjI2MzgwNmUyM2FkYTFhYjUwNTc3MDA5ZjljNWYwOWNmYzg4Y2FlNTlhNDAxOThlODgyYTExYmNiMTcrnxg=: 00:19:09.713 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:09.713 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:09.713 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:09.713 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTg5YTc0ZjI2MzgwNmUyM2FkYTFhYjUwNTc3MDA5ZjljNWYwOWNmYzg4Y2FlNTlhNDAxOThlODgyYTExYmNiMTcrnxg=: 00:19:09.713 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:09.713 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:19:09.713 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:09.713 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:09.713 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:09.713 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:09.713 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:09.713 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:09.713 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.713 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:09.713 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.713 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:09.713 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:09.713 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:09.713 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:09.713 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:09.713 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:09.713 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:09.713 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:09.713 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:09.713 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:09.713 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:09.713 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:09.713 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.713 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
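[note] The trace above repeats one connect/verify/detach cycle per digest, DH group and key index. The following is a minimal sketch of that per-iteration initiator-side flow, not part of the captured log: it assumes rpc_cmd wraps SPDK's rpc.py against the running target, that the DHCHAP secrets were already registered under the names key$keyid / ckey$keyid earlier in auth.sh, and that digest/dhgroup/keyid are illustrative stand-ins for the loop variables driving the test.

  # Restrict the initiator to the digest and DH group under test.
  digest=sha384      # illustrative value taken from this run
  dhgroup=ffdhe3072  # illustrative value taken from this run
  keyid=0
  rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

  # Connect with the host key and, when present, the controller (bidirectional) key;
  # 10.0.0.1:4420 is the NVMF_INITIATOR_IP/port resolved by get_main_ns_ip above.
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key "key${keyid}" --dhchap-ctrlr-key "ckey${keyid}"

  # Authentication succeeded if the controller shows up under its expected name,
  # after which the test detaches it and moves on to the next key/DH group.
  [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
  rpc_cmd bdev_nvme_detach_controller nvme0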
00:19:09.972 nvme0n1 00:19:09.972 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.972 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:09.972 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:09.972 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.972 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:09.972 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.972 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:09.972 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:09.972 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.972 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:09.972 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.972 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:09.972 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:09.972 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:19:09.972 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:09.972 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:09.972 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:09.972 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:09.972 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTE0N2E4MTNhNDg3YmU4MjQ5YTFmZDU2N2U4M2MyYjA/mFNG: 00:19:09.972 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTM2YTM5Y2FlZDFmODA1YTk0YTI2YjBmYWI5ZmU1MDM1NzQ1NzZjZTc4Y2JhNWI4OTUyZjY4YjY3YTQyZWQ5MgpSCvc=: 00:19:09.972 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:09.972 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:09.972 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTE0N2E4MTNhNDg3YmU4MjQ5YTFmZDU2N2U4M2MyYjA/mFNG: 00:19:09.972 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTM2YTM5Y2FlZDFmODA1YTk0YTI2YjBmYWI5ZmU1MDM1NzQ1NzZjZTc4Y2JhNWI4OTUyZjY4YjY3YTQyZWQ5MgpSCvc=: ]] 00:19:09.972 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTM2YTM5Y2FlZDFmODA1YTk0YTI2YjBmYWI5ZmU1MDM1NzQ1NzZjZTc4Y2JhNWI4OTUyZjY4YjY3YTQyZWQ5MgpSCvc=: 00:19:09.972 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:19:09.972 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:09.972 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:09.972 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:09.972 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:09.972 09:46:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:09.972 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:09.972 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.972 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:09.972 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.972 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:09.972 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:09.972 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:09.972 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:09.972 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:09.972 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:09.972 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:09.972 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:09.972 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:09.972 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:09.972 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:09.972 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:09.972 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.972 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:10.244 nvme0n1 00:19:10.244 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.244 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:10.244 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:10.244 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.244 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:10.244 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.244 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:10.244 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:10.244 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.244 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:10.244 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.244 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:10.244 09:46:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:19:10.244 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:10.244 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:10.244 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:10.244 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:10.244 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDQ4NTVkZDA5Zjc4YzJkZmM3ODYzMGJkZThjMDIxN2FhMzBkYzIwNDNkOThlNjI1epBSfw==: 00:19:10.244 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDllNmIxYzMzNjZlNDE2YzZmNGQ4NGEzMTA3OWI1NmI2NzUzNTRkZTkyYTVhZTNi7H0nyA==: 00:19:10.244 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:10.244 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:10.244 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDQ4NTVkZDA5Zjc4YzJkZmM3ODYzMGJkZThjMDIxN2FhMzBkYzIwNDNkOThlNjI1epBSfw==: 00:19:10.244 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDllNmIxYzMzNjZlNDE2YzZmNGQ4NGEzMTA3OWI1NmI2NzUzNTRkZTkyYTVhZTNi7H0nyA==: ]] 00:19:10.244 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDllNmIxYzMzNjZlNDE2YzZmNGQ4NGEzMTA3OWI1NmI2NzUzNTRkZTkyYTVhZTNi7H0nyA==: 00:19:10.244 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:19:10.244 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:10.244 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:10.244 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:10.244 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:10.244 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:10.244 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:10.244 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.244 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:10.244 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.244 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:10.244 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:10.244 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:10.244 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:10.244 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:10.244 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:10.244 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:10.244 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:10.244 09:46:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:10.244 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:10.244 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:10.244 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:10.244 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.244 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:10.502 nvme0n1 00:19:10.502 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.502 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:10.502 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.502 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:10.502 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:10.502 09:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.502 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:10.502 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:10.502 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.502 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:10.502 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.502 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:10.502 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:19:10.502 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:10.502 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:10.502 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:10.502 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:10.502 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjQ1YWViZTc3N2IyODIzNDkzMTMyNDIwZjQxODIyNzLy8g5n: 00:19:10.502 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjE0NmE1OTYwM2JhNDM5NWU1NTE0MmIwOWYyM2U1YWSkG+Xz: 00:19:10.502 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:10.502 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:10.502 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjQ1YWViZTc3N2IyODIzNDkzMTMyNDIwZjQxODIyNzLy8g5n: 00:19:10.502 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjE0NmE1OTYwM2JhNDM5NWU1NTE0MmIwOWYyM2U1YWSkG+Xz: ]] 00:19:10.502 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjE0NmE1OTYwM2JhNDM5NWU1NTE0MmIwOWYyM2U1YWSkG+Xz: 00:19:10.502 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:19:10.502 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:10.502 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:10.502 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:10.502 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:10.502 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:10.503 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:10.503 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.503 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:10.503 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.503 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:10.503 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:10.503 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:10.503 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:10.503 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:10.503 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:10.503 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:10.503 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:10.503 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:10.503 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:10.503 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:10.503 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:10.503 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.503 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:10.761 nvme0n1 00:19:10.762 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.762 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:10.762 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.762 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:10.762 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:10.762 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.762 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:10.762 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:19:10.762 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.762 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:10.762 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.762 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:10.762 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:19:10.762 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:10.762 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:10.762 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:10.762 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:10.762 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzY4ZTg4NTY1ZjNjYzI1ZWY1OTYxMzhhMmEwYzI2OGQyNWU1ZDNjMzMzMDhlMWU4CrVhZg==: 00:19:10.762 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2U5MzBiMjBjYjFlZGNkNTNlODQ1Y2VkODU3MTgxZDXGKAMl: 00:19:10.762 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:10.762 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:10.762 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzY4ZTg4NTY1ZjNjYzI1ZWY1OTYxMzhhMmEwYzI2OGQyNWU1ZDNjMzMzMDhlMWU4CrVhZg==: 00:19:10.762 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2U5MzBiMjBjYjFlZGNkNTNlODQ1Y2VkODU3MTgxZDXGKAMl: ]] 00:19:10.762 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2U5MzBiMjBjYjFlZGNkNTNlODQ1Y2VkODU3MTgxZDXGKAMl: 00:19:10.762 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:19:10.762 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:10.762 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:10.762 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:10.762 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:10.762 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:10.762 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:10.762 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.762 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:10.762 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.762 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:10.762 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:10.762 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:10.762 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:10.762 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:10.762 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:10.762 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:10.762 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:10.762 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:10.762 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:10.762 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:10.762 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:10.762 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.762 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:11.021 nvme0n1 00:19:11.021 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.021 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:11.021 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.021 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:11.021 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:11.021 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.021 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:11.021 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:11.021 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.021 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:11.021 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.021 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:11.021 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:19:11.021 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:11.021 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:11.021 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:11.021 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:11.021 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTg5YTc0ZjI2MzgwNmUyM2FkYTFhYjUwNTc3MDA5ZjljNWYwOWNmYzg4Y2FlNTlhNDAxOThlODgyYTExYmNiMTcrnxg=: 00:19:11.021 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:11.021 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:11.021 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:11.021 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MTg5YTc0ZjI2MzgwNmUyM2FkYTFhYjUwNTc3MDA5ZjljNWYwOWNmYzg4Y2FlNTlhNDAxOThlODgyYTExYmNiMTcrnxg=: 00:19:11.021 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:11.021 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:19:11.021 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:11.021 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:11.021 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:11.021 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:11.021 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:11.021 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:11.021 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.021 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:11.022 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.022 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:11.022 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:11.022 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:11.022 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:11.022 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:11.022 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:11.022 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:11.022 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:11.022 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:11.022 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:11.022 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:11.022 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:11.022 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.022 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:11.281 nvme0n1 00:19:11.281 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.281 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:11.281 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.281 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:11.281 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:11.281 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.281 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:11.281 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:11.281 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.281 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:11.281 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.281 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:11.281 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:11.281 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:19:11.281 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:11.281 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:11.281 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:11.281 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:11.281 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTE0N2E4MTNhNDg3YmU4MjQ5YTFmZDU2N2U4M2MyYjA/mFNG: 00:19:11.281 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTM2YTM5Y2FlZDFmODA1YTk0YTI2YjBmYWI5ZmU1MDM1NzQ1NzZjZTc4Y2JhNWI4OTUyZjY4YjY3YTQyZWQ5MgpSCvc=: 00:19:11.281 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:11.281 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:11.281 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTE0N2E4MTNhNDg3YmU4MjQ5YTFmZDU2N2U4M2MyYjA/mFNG: 00:19:11.281 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTM2YTM5Y2FlZDFmODA1YTk0YTI2YjBmYWI5ZmU1MDM1NzQ1NzZjZTc4Y2JhNWI4OTUyZjY4YjY3YTQyZWQ5MgpSCvc=: ]] 00:19:11.281 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTM2YTM5Y2FlZDFmODA1YTk0YTI2YjBmYWI5ZmU1MDM1NzQ1NzZjZTc4Y2JhNWI4OTUyZjY4YjY3YTQyZWQ5MgpSCvc=: 00:19:11.281 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:19:11.281 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:11.281 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:11.281 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:11.281 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:11.281 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:11.281 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:11.281 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.281 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:11.281 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.281 09:46:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:11.281 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:11.281 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:11.281 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:11.281 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:11.281 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:11.281 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:11.281 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:11.281 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:11.281 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:11.281 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:11.281 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:11.281 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.281 09:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:11.848 nvme0n1 00:19:11.848 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.848 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:11.848 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:11.848 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.848 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:11.848 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.848 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:11.848 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:11.848 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.848 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:11.848 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.848 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:11.848 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:19:11.848 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:11.848 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:11.848 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:11.848 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:11.848 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NDQ4NTVkZDA5Zjc4YzJkZmM3ODYzMGJkZThjMDIxN2FhMzBkYzIwNDNkOThlNjI1epBSfw==: 00:19:11.848 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDllNmIxYzMzNjZlNDE2YzZmNGQ4NGEzMTA3OWI1NmI2NzUzNTRkZTkyYTVhZTNi7H0nyA==: 00:19:11.848 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:11.848 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:11.848 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDQ4NTVkZDA5Zjc4YzJkZmM3ODYzMGJkZThjMDIxN2FhMzBkYzIwNDNkOThlNjI1epBSfw==: 00:19:11.848 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDllNmIxYzMzNjZlNDE2YzZmNGQ4NGEzMTA3OWI1NmI2NzUzNTRkZTkyYTVhZTNi7H0nyA==: ]] 00:19:11.848 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDllNmIxYzMzNjZlNDE2YzZmNGQ4NGEzMTA3OWI1NmI2NzUzNTRkZTkyYTVhZTNi7H0nyA==: 00:19:11.848 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:19:11.848 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:11.848 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:11.848 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:11.848 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:11.849 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:11.849 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:11.849 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.849 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:11.849 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.849 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:11.849 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:11.849 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:11.849 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:11.849 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:11.849 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:11.849 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:11.849 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:11.849 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:11.849 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:11.849 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:11.849 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:11.849 09:46:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.849 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:12.107 nvme0n1 00:19:12.107 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.107 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:12.107 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:12.107 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.107 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:12.107 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.365 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:12.365 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:12.365 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.365 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:12.365 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.366 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:12.366 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:19:12.366 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:12.366 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:12.366 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:12.366 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:12.366 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjQ1YWViZTc3N2IyODIzNDkzMTMyNDIwZjQxODIyNzLy8g5n: 00:19:12.366 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjE0NmE1OTYwM2JhNDM5NWU1NTE0MmIwOWYyM2U1YWSkG+Xz: 00:19:12.366 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:12.366 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:12.366 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjQ1YWViZTc3N2IyODIzNDkzMTMyNDIwZjQxODIyNzLy8g5n: 00:19:12.366 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjE0NmE1OTYwM2JhNDM5NWU1NTE0MmIwOWYyM2U1YWSkG+Xz: ]] 00:19:12.366 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjE0NmE1OTYwM2JhNDM5NWU1NTE0MmIwOWYyM2U1YWSkG+Xz: 00:19:12.366 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:19:12.366 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:12.366 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:12.366 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:12.366 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:12.366 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:12.366 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:12.366 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.366 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:12.366 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.366 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:12.366 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:12.366 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:12.366 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:12.366 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:12.366 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:12.366 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:12.366 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:12.366 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:12.366 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:12.366 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:12.366 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:12.366 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.366 09:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:12.626 nvme0n1 00:19:12.626 09:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.626 09:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:12.626 09:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.626 09:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:12.626 09:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:12.626 09:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.626 09:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:12.626 09:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:12.626 09:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.626 09:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:12.626 09:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.626 09:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:12.626 09:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe6144 3 00:19:12.626 09:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:12.626 09:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:12.626 09:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:12.626 09:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:12.626 09:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzY4ZTg4NTY1ZjNjYzI1ZWY1OTYxMzhhMmEwYzI2OGQyNWU1ZDNjMzMzMDhlMWU4CrVhZg==: 00:19:12.626 09:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2U5MzBiMjBjYjFlZGNkNTNlODQ1Y2VkODU3MTgxZDXGKAMl: 00:19:12.626 09:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:12.626 09:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:12.626 09:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzY4ZTg4NTY1ZjNjYzI1ZWY1OTYxMzhhMmEwYzI2OGQyNWU1ZDNjMzMzMDhlMWU4CrVhZg==: 00:19:12.626 09:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2U5MzBiMjBjYjFlZGNkNTNlODQ1Y2VkODU3MTgxZDXGKAMl: ]] 00:19:12.626 09:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2U5MzBiMjBjYjFlZGNkNTNlODQ1Y2VkODU3MTgxZDXGKAMl: 00:19:12.626 09:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:19:12.626 09:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:12.626 09:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:12.626 09:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:12.626 09:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:12.626 09:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:12.626 09:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:12.626 09:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.626 09:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:12.626 09:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.626 09:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:12.626 09:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:12.626 09:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:12.626 09:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:12.626 09:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:12.626 09:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:12.626 09:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:12.626 09:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:12.626 09:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:12.626 09:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:12.626 09:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:12.626 09:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:12.626 09:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.626 09:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:13.195 nvme0n1 00:19:13.195 09:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.195 09:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:13.195 09:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:13.195 09:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.195 09:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:13.195 09:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.195 09:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:13.195 09:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:13.195 09:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.195 09:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:13.195 09:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.195 09:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:13.195 09:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:19:13.195 09:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:13.196 09:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:13.196 09:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:13.196 09:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:13.196 09:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTg5YTc0ZjI2MzgwNmUyM2FkYTFhYjUwNTc3MDA5ZjljNWYwOWNmYzg4Y2FlNTlhNDAxOThlODgyYTExYmNiMTcrnxg=: 00:19:13.196 09:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:13.196 09:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:13.196 09:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:13.196 09:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTg5YTc0ZjI2MzgwNmUyM2FkYTFhYjUwNTc3MDA5ZjljNWYwOWNmYzg4Y2FlNTlhNDAxOThlODgyYTExYmNiMTcrnxg=: 00:19:13.196 09:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:13.196 09:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:19:13.196 09:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:13.196 09:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:13.196 09:47:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:13.196 09:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:13.196 09:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:13.196 09:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:13.196 09:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.196 09:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:13.196 09:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.196 09:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:13.196 09:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:13.196 09:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:13.196 09:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:13.196 09:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:13.196 09:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:13.196 09:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:13.196 09:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:13.196 09:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:13.196 09:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:13.196 09:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:13.196 09:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:13.196 09:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.196 09:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:13.455 nvme0n1 00:19:13.455 09:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.455 09:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:13.455 09:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.455 09:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:13.455 09:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:13.455 09:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.455 09:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:13.455 09:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:13.455 09:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.455 09:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:13.455 09:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.455 09:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:13.455 09:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:13.455 09:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:19:13.455 09:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:13.455 09:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:13.455 09:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:13.455 09:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:13.455 09:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTE0N2E4MTNhNDg3YmU4MjQ5YTFmZDU2N2U4M2MyYjA/mFNG: 00:19:13.456 09:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTM2YTM5Y2FlZDFmODA1YTk0YTI2YjBmYWI5ZmU1MDM1NzQ1NzZjZTc4Y2JhNWI4OTUyZjY4YjY3YTQyZWQ5MgpSCvc=: 00:19:13.456 09:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:13.456 09:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:13.456 09:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTE0N2E4MTNhNDg3YmU4MjQ5YTFmZDU2N2U4M2MyYjA/mFNG: 00:19:13.456 09:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTM2YTM5Y2FlZDFmODA1YTk0YTI2YjBmYWI5ZmU1MDM1NzQ1NzZjZTc4Y2JhNWI4OTUyZjY4YjY3YTQyZWQ5MgpSCvc=: ]] 00:19:13.456 09:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTM2YTM5Y2FlZDFmODA1YTk0YTI2YjBmYWI5ZmU1MDM1NzQ1NzZjZTc4Y2JhNWI4OTUyZjY4YjY3YTQyZWQ5MgpSCvc=: 00:19:13.456 09:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:19:13.456 09:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:13.456 09:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:13.456 09:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:13.456 09:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:13.456 09:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:13.456 09:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:13.456 09:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.456 09:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:13.456 09:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.456 09:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:13.456 09:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:13.456 09:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:13.715 09:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:13.715 09:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:13.715 09:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:13.715 09:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:13.715 09:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:13.715 09:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:13.715 09:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:13.715 09:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:13.715 09:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:13.715 09:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.715 09:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:14.284 nvme0n1 00:19:14.285 09:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.285 09:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:14.285 09:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:14.285 09:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.285 09:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:14.285 09:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.285 09:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:14.285 09:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:14.285 09:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.285 09:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:14.285 09:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.285 09:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:14.285 09:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:19:14.285 09:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:14.285 09:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:14.285 09:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:14.285 09:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:14.285 09:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDQ4NTVkZDA5Zjc4YzJkZmM3ODYzMGJkZThjMDIxN2FhMzBkYzIwNDNkOThlNjI1epBSfw==: 00:19:14.285 09:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDllNmIxYzMzNjZlNDE2YzZmNGQ4NGEzMTA3OWI1NmI2NzUzNTRkZTkyYTVhZTNi7H0nyA==: 00:19:14.285 09:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:14.285 09:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:14.285 09:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NDQ4NTVkZDA5Zjc4YzJkZmM3ODYzMGJkZThjMDIxN2FhMzBkYzIwNDNkOThlNjI1epBSfw==: 00:19:14.285 09:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDllNmIxYzMzNjZlNDE2YzZmNGQ4NGEzMTA3OWI1NmI2NzUzNTRkZTkyYTVhZTNi7H0nyA==: ]] 00:19:14.285 09:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDllNmIxYzMzNjZlNDE2YzZmNGQ4NGEzMTA3OWI1NmI2NzUzNTRkZTkyYTVhZTNi7H0nyA==: 00:19:14.285 09:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:19:14.285 09:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:14.285 09:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:14.285 09:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:14.285 09:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:14.285 09:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:14.285 09:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:14.285 09:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.285 09:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:14.285 09:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.285 09:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:14.285 09:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:14.285 09:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:14.285 09:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:14.285 09:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:14.285 09:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:14.285 09:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:14.285 09:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:14.285 09:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:14.285 09:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:14.285 09:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:14.285 09:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:14.285 09:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.285 09:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:14.852 nvme0n1 00:19:14.852 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.852 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:14.852 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:14.852 09:47:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.852 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:14.852 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.852 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:14.852 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:14.852 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.852 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:14.852 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.852 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:14.852 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:19:14.853 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:14.853 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:14.853 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:14.853 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:14.853 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjQ1YWViZTc3N2IyODIzNDkzMTMyNDIwZjQxODIyNzLy8g5n: 00:19:14.853 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjE0NmE1OTYwM2JhNDM5NWU1NTE0MmIwOWYyM2U1YWSkG+Xz: 00:19:14.853 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:14.853 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:14.853 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjQ1YWViZTc3N2IyODIzNDkzMTMyNDIwZjQxODIyNzLy8g5n: 00:19:14.853 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjE0NmE1OTYwM2JhNDM5NWU1NTE0MmIwOWYyM2U1YWSkG+Xz: ]] 00:19:14.853 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjE0NmE1OTYwM2JhNDM5NWU1NTE0MmIwOWYyM2U1YWSkG+Xz: 00:19:14.853 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:19:14.853 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:14.853 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:14.853 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:14.853 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:14.853 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:14.853 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:14.853 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.853 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:14.853 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.853 09:47:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:14.853 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:14.853 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:14.853 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:14.853 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:14.853 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:14.853 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:14.853 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:14.853 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:14.853 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:14.853 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:14.853 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:14.853 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.853 09:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:15.420 nvme0n1 00:19:15.420 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.420 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:15.420 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.420 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:15.420 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:15.421 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.679 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:15.679 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:15.679 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.679 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:15.679 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.679 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:15.679 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:19:15.679 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:15.679 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:15.679 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:15.679 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:15.679 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:YzY4ZTg4NTY1ZjNjYzI1ZWY1OTYxMzhhMmEwYzI2OGQyNWU1ZDNjMzMzMDhlMWU4CrVhZg==: 00:19:15.679 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2U5MzBiMjBjYjFlZGNkNTNlODQ1Y2VkODU3MTgxZDXGKAMl: 00:19:15.679 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:15.679 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:15.679 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzY4ZTg4NTY1ZjNjYzI1ZWY1OTYxMzhhMmEwYzI2OGQyNWU1ZDNjMzMzMDhlMWU4CrVhZg==: 00:19:15.679 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2U5MzBiMjBjYjFlZGNkNTNlODQ1Y2VkODU3MTgxZDXGKAMl: ]] 00:19:15.679 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2U5MzBiMjBjYjFlZGNkNTNlODQ1Y2VkODU3MTgxZDXGKAMl: 00:19:15.679 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:19:15.679 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:15.679 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:15.679 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:15.679 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:15.679 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:15.679 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:15.679 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.679 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:15.679 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.679 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:15.679 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:15.679 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:15.679 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:15.679 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:15.679 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:15.679 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:15.679 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:15.679 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:15.679 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:15.679 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:15.679 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:15.679 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.680 
09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:16.247 nvme0n1 00:19:16.247 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.247 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:16.247 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.247 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:16.247 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:16.247 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.247 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:16.247 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:16.247 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.247 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:16.247 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.247 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:16.247 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:19:16.247 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:16.247 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:16.247 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:16.247 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:16.247 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTg5YTc0ZjI2MzgwNmUyM2FkYTFhYjUwNTc3MDA5ZjljNWYwOWNmYzg4Y2FlNTlhNDAxOThlODgyYTExYmNiMTcrnxg=: 00:19:16.247 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:16.247 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:16.247 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:16.247 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTg5YTc0ZjI2MzgwNmUyM2FkYTFhYjUwNTc3MDA5ZjljNWYwOWNmYzg4Y2FlNTlhNDAxOThlODgyYTExYmNiMTcrnxg=: 00:19:16.247 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:16.247 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:19:16.247 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:16.247 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:16.247 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:16.247 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:16.247 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:16.248 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:16.248 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.248 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:16.248 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.248 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:16.248 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:16.248 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:16.248 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:16.248 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:16.248 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:16.248 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:16.248 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:16.248 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:16.248 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:16.248 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:16.248 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:16.248 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.248 09:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:16.839 nvme0n1 00:19:16.839 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.839 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:16.839 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:16.839 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.839 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:16.839 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.839 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:16.839 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:16.839 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.839 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:16.839 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.839 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:19:16.839 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:16.839 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:16.839 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:19:16.839 09:47:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:16.839 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:16.839 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:16.839 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:16.839 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTE0N2E4MTNhNDg3YmU4MjQ5YTFmZDU2N2U4M2MyYjA/mFNG: 00:19:16.839 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTM2YTM5Y2FlZDFmODA1YTk0YTI2YjBmYWI5ZmU1MDM1NzQ1NzZjZTc4Y2JhNWI4OTUyZjY4YjY3YTQyZWQ5MgpSCvc=: 00:19:16.839 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:16.839 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:16.839 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTE0N2E4MTNhNDg3YmU4MjQ5YTFmZDU2N2U4M2MyYjA/mFNG: 00:19:16.839 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTM2YTM5Y2FlZDFmODA1YTk0YTI2YjBmYWI5ZmU1MDM1NzQ1NzZjZTc4Y2JhNWI4OTUyZjY4YjY3YTQyZWQ5MgpSCvc=: ]] 00:19:16.839 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTM2YTM5Y2FlZDFmODA1YTk0YTI2YjBmYWI5ZmU1MDM1NzQ1NzZjZTc4Y2JhNWI4OTUyZjY4YjY3YTQyZWQ5MgpSCvc=: 00:19:16.839 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:19:16.839 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:16.839 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:16.839 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:16.839 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:16.839 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:16.839 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:16.839 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.839 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:17.099 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.099 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:17.099 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:17.099 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:17.099 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:17.099 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:17.099 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:17.099 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:17.099 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:17.099 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:17.099 09:47:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:17.099 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:17.099 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:17.099 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.099 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:17.099 nvme0n1 00:19:17.099 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.099 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:17.099 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.099 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:17.099 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:17.099 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.099 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:17.099 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:17.099 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.099 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:17.099 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.099 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:17.099 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:19:17.099 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:17.099 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:17.099 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:17.099 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:17.099 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDQ4NTVkZDA5Zjc4YzJkZmM3ODYzMGJkZThjMDIxN2FhMzBkYzIwNDNkOThlNjI1epBSfw==: 00:19:17.099 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDllNmIxYzMzNjZlNDE2YzZmNGQ4NGEzMTA3OWI1NmI2NzUzNTRkZTkyYTVhZTNi7H0nyA==: 00:19:17.099 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:17.099 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:17.099 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDQ4NTVkZDA5Zjc4YzJkZmM3ODYzMGJkZThjMDIxN2FhMzBkYzIwNDNkOThlNjI1epBSfw==: 00:19:17.100 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDllNmIxYzMzNjZlNDE2YzZmNGQ4NGEzMTA3OWI1NmI2NzUzNTRkZTkyYTVhZTNi7H0nyA==: ]] 00:19:17.100 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDllNmIxYzMzNjZlNDE2YzZmNGQ4NGEzMTA3OWI1NmI2NzUzNTRkZTkyYTVhZTNi7H0nyA==: 00:19:17.100 09:47:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:19:17.100 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:17.100 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:17.100 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:17.100 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:17.100 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:17.100 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:17.100 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.100 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:17.100 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.100 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:17.100 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:17.100 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:17.100 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:17.100 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:17.100 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:17.100 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:17.100 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:17.100 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:17.100 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:17.100 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:17.100 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:17.100 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.100 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:17.359 nvme0n1 00:19:17.359 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.359 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:17.359 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.359 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:17.359 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:17.359 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.359 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:17.359 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:17.359 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.359 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:17.359 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.359 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:17.359 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:19:17.359 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:17.359 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:17.359 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:17.359 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:17.359 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjQ1YWViZTc3N2IyODIzNDkzMTMyNDIwZjQxODIyNzLy8g5n: 00:19:17.359 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjE0NmE1OTYwM2JhNDM5NWU1NTE0MmIwOWYyM2U1YWSkG+Xz: 00:19:17.359 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:17.359 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:17.359 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjQ1YWViZTc3N2IyODIzNDkzMTMyNDIwZjQxODIyNzLy8g5n: 00:19:17.359 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjE0NmE1OTYwM2JhNDM5NWU1NTE0MmIwOWYyM2U1YWSkG+Xz: ]] 00:19:17.359 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjE0NmE1OTYwM2JhNDM5NWU1NTE0MmIwOWYyM2U1YWSkG+Xz: 00:19:17.359 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:19:17.359 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:17.359 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:17.359 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:17.359 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:17.359 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:17.359 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:17.359 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.359 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:17.359 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.359 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:17.359 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:17.359 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:17.359 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:17.359 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:17.359 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:17.359 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:17.359 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:17.359 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:17.359 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:17.359 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:17.359 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:17.359 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.359 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:17.359 nvme0n1 00:19:17.359 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.359 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:17.359 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.359 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:17.359 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:17.359 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.359 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:17.359 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:17.359 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.359 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:17.619 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.619 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:17.619 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:19:17.619 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:17.619 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:17.619 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:17.619 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:17.619 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzY4ZTg4NTY1ZjNjYzI1ZWY1OTYxMzhhMmEwYzI2OGQyNWU1ZDNjMzMzMDhlMWU4CrVhZg==: 00:19:17.619 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2U5MzBiMjBjYjFlZGNkNTNlODQ1Y2VkODU3MTgxZDXGKAMl: 00:19:17.619 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:17.619 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:17.619 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:02:YzY4ZTg4NTY1ZjNjYzI1ZWY1OTYxMzhhMmEwYzI2OGQyNWU1ZDNjMzMzMDhlMWU4CrVhZg==: 00:19:17.619 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2U5MzBiMjBjYjFlZGNkNTNlODQ1Y2VkODU3MTgxZDXGKAMl: ]] 00:19:17.619 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2U5MzBiMjBjYjFlZGNkNTNlODQ1Y2VkODU3MTgxZDXGKAMl: 00:19:17.619 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:19:17.619 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:17.619 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:17.619 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:17.619 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:17.619 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:17.619 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:17.619 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.619 09:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:17.619 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.619 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:17.619 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:17.619 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:17.619 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:17.619 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:17.619 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:17.619 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:17.619 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:17.619 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:17.619 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:17.619 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:17.619 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:17.619 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.619 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:17.619 nvme0n1 00:19:17.619 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.619 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:17.619 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:17.620 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.620 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:17.620 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.620 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:17.620 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:17.620 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.620 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:17.620 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.620 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:17.620 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:19:17.620 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:17.620 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:17.620 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:17.620 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:17.620 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTg5YTc0ZjI2MzgwNmUyM2FkYTFhYjUwNTc3MDA5ZjljNWYwOWNmYzg4Y2FlNTlhNDAxOThlODgyYTExYmNiMTcrnxg=: 00:19:17.620 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:17.620 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:17.620 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:17.620 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTg5YTc0ZjI2MzgwNmUyM2FkYTFhYjUwNTc3MDA5ZjljNWYwOWNmYzg4Y2FlNTlhNDAxOThlODgyYTExYmNiMTcrnxg=: 00:19:17.620 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:17.620 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:19:17.620 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:17.620 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:17.620 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:17.620 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:17.620 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:17.620 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:17.620 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.620 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:17.620 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.620 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:17.620 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:17.620 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates=() 00:19:17.620 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:17.620 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:17.620 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:17.620 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:17.620 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:17.620 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:17.620 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:17.620 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:17.620 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:17.620 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.620 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:17.879 nvme0n1 00:19:17.879 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.879 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:17.879 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:17.879 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.879 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:17.879 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.879 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:17.879 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:17.879 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.879 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:17.879 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.879 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:17.879 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:17.879 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:19:17.879 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:17.879 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:17.879 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:17.879 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:17.879 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTE0N2E4MTNhNDg3YmU4MjQ5YTFmZDU2N2U4M2MyYjA/mFNG: 00:19:17.879 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:NTM2YTM5Y2FlZDFmODA1YTk0YTI2YjBmYWI5ZmU1MDM1NzQ1NzZjZTc4Y2JhNWI4OTUyZjY4YjY3YTQyZWQ5MgpSCvc=: 00:19:17.879 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:17.880 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:17.880 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTE0N2E4MTNhNDg3YmU4MjQ5YTFmZDU2N2U4M2MyYjA/mFNG: 00:19:17.880 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTM2YTM5Y2FlZDFmODA1YTk0YTI2YjBmYWI5ZmU1MDM1NzQ1NzZjZTc4Y2JhNWI4OTUyZjY4YjY3YTQyZWQ5MgpSCvc=: ]] 00:19:17.880 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTM2YTM5Y2FlZDFmODA1YTk0YTI2YjBmYWI5ZmU1MDM1NzQ1NzZjZTc4Y2JhNWI4OTUyZjY4YjY3YTQyZWQ5MgpSCvc=: 00:19:17.880 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:19:17.880 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:17.880 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:17.880 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:17.880 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:17.880 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:17.880 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:17.880 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.880 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:17.880 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.880 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:17.880 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:17.880 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:17.880 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:17.880 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:17.880 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:17.880 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:17.880 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:17.880 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:17.880 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:17.880 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:17.880 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:17.880 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.880 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:19:18.139 nvme0n1 00:19:18.140 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.140 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:18.140 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:18.140 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.140 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:18.140 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.140 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:18.140 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:18.140 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.140 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:18.140 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.140 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:18.140 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:19:18.140 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:18.140 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:18.140 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:18.140 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:18.140 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDQ4NTVkZDA5Zjc4YzJkZmM3ODYzMGJkZThjMDIxN2FhMzBkYzIwNDNkOThlNjI1epBSfw==: 00:19:18.140 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDllNmIxYzMzNjZlNDE2YzZmNGQ4NGEzMTA3OWI1NmI2NzUzNTRkZTkyYTVhZTNi7H0nyA==: 00:19:18.140 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:18.140 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:18.140 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDQ4NTVkZDA5Zjc4YzJkZmM3ODYzMGJkZThjMDIxN2FhMzBkYzIwNDNkOThlNjI1epBSfw==: 00:19:18.140 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDllNmIxYzMzNjZlNDE2YzZmNGQ4NGEzMTA3OWI1NmI2NzUzNTRkZTkyYTVhZTNi7H0nyA==: ]] 00:19:18.140 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDllNmIxYzMzNjZlNDE2YzZmNGQ4NGEzMTA3OWI1NmI2NzUzNTRkZTkyYTVhZTNi7H0nyA==: 00:19:18.140 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:19:18.140 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:18.140 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:18.140 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:18.140 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:18.140 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:19:18.140 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:18.140 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.140 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:18.140 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.140 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:18.140 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:18.140 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:18.140 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:18.140 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:18.140 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:18.140 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:18.140 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:18.140 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:18.140 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:18.140 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:18.140 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:18.140 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.140 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:18.140 nvme0n1 00:19:18.140 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.140 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:18.140 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.140 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:18.140 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:18.140 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.140 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:18.140 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:18.140 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.140 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:18.400 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.400 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:18.400 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:19:18.400 
09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:18.400 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:18.400 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:18.400 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:18.400 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjQ1YWViZTc3N2IyODIzNDkzMTMyNDIwZjQxODIyNzLy8g5n: 00:19:18.400 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjE0NmE1OTYwM2JhNDM5NWU1NTE0MmIwOWYyM2U1YWSkG+Xz: 00:19:18.400 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:18.400 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:18.400 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjQ1YWViZTc3N2IyODIzNDkzMTMyNDIwZjQxODIyNzLy8g5n: 00:19:18.400 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjE0NmE1OTYwM2JhNDM5NWU1NTE0MmIwOWYyM2U1YWSkG+Xz: ]] 00:19:18.400 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjE0NmE1OTYwM2JhNDM5NWU1NTE0MmIwOWYyM2U1YWSkG+Xz: 00:19:18.400 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:19:18.400 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:18.400 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:18.400 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:18.400 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:18.400 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:18.400 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:18.400 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.400 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:18.400 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.400 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:18.400 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:18.400 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:18.400 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:18.400 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:18.400 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:18.400 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:18.400 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:18.400 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:18.400 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:18.400 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:18.400 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:18.400 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.400 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:18.400 nvme0n1 00:19:18.401 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.401 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:18.401 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:18.401 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.401 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:18.401 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.401 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:18.401 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:18.401 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.401 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:18.401 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.401 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:18.401 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:19:18.401 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:18.401 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:18.401 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:18.401 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:18.401 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzY4ZTg4NTY1ZjNjYzI1ZWY1OTYxMzhhMmEwYzI2OGQyNWU1ZDNjMzMzMDhlMWU4CrVhZg==: 00:19:18.401 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2U5MzBiMjBjYjFlZGNkNTNlODQ1Y2VkODU3MTgxZDXGKAMl: 00:19:18.401 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:18.401 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:18.401 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzY4ZTg4NTY1ZjNjYzI1ZWY1OTYxMzhhMmEwYzI2OGQyNWU1ZDNjMzMzMDhlMWU4CrVhZg==: 00:19:18.401 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2U5MzBiMjBjYjFlZGNkNTNlODQ1Y2VkODU3MTgxZDXGKAMl: ]] 00:19:18.401 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2U5MzBiMjBjYjFlZGNkNTNlODQ1Y2VkODU3MTgxZDXGKAMl: 00:19:18.401 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:19:18.401 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:18.401 
09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:18.401 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:18.401 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:18.401 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:18.401 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:18.401 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.401 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:18.401 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.401 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:18.401 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:18.401 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:18.401 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:18.401 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:18.401 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:18.401 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:18.401 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:18.401 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:18.401 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:18.401 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:18.401 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:18.401 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.401 09:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:18.660 nvme0n1 00:19:18.660 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.660 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:18.660 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:18.660 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.660 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:18.660 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.660 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:18.660 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:18.660 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.660 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:19:18.660 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.660 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:18.660 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:19:18.660 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:18.660 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:18.660 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:18.660 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:18.660 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTg5YTc0ZjI2MzgwNmUyM2FkYTFhYjUwNTc3MDA5ZjljNWYwOWNmYzg4Y2FlNTlhNDAxOThlODgyYTExYmNiMTcrnxg=: 00:19:18.660 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:18.660 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:18.660 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:18.660 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTg5YTc0ZjI2MzgwNmUyM2FkYTFhYjUwNTc3MDA5ZjljNWYwOWNmYzg4Y2FlNTlhNDAxOThlODgyYTExYmNiMTcrnxg=: 00:19:18.660 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:18.660 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:19:18.660 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:18.660 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:18.661 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:18.661 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:18.661 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:18.661 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:18.661 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.661 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:18.661 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.661 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:18.661 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:18.661 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:18.661 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:18.661 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:18.661 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:18.661 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:18.661 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:18.661 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:18.661 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:18.661 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:18.661 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:18.661 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.661 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:18.920 nvme0n1 00:19:18.921 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.921 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:18.921 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.921 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:18.921 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:18.921 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.921 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:18.921 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:18.921 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.921 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:18.921 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.921 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:18.921 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:18.921 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:19:18.921 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:18.921 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:18.921 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:18.921 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:18.921 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTE0N2E4MTNhNDg3YmU4MjQ5YTFmZDU2N2U4M2MyYjA/mFNG: 00:19:18.921 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTM2YTM5Y2FlZDFmODA1YTk0YTI2YjBmYWI5ZmU1MDM1NzQ1NzZjZTc4Y2JhNWI4OTUyZjY4YjY3YTQyZWQ5MgpSCvc=: 00:19:18.921 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:18.921 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:18.921 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTE0N2E4MTNhNDg3YmU4MjQ5YTFmZDU2N2U4M2MyYjA/mFNG: 00:19:18.921 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTM2YTM5Y2FlZDFmODA1YTk0YTI2YjBmYWI5ZmU1MDM1NzQ1NzZjZTc4Y2JhNWI4OTUyZjY4YjY3YTQyZWQ5MgpSCvc=: ]] 00:19:18.921 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:NTM2YTM5Y2FlZDFmODA1YTk0YTI2YjBmYWI5ZmU1MDM1NzQ1NzZjZTc4Y2JhNWI4OTUyZjY4YjY3YTQyZWQ5MgpSCvc=: 00:19:18.921 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:19:18.921 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:18.921 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:18.921 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:18.921 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:18.921 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:18.921 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:18.921 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.921 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:18.921 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.921 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:18.921 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:18.921 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:18.921 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:18.921 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:18.921 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:18.921 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:18.921 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:18.921 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:18.921 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:18.921 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:18.921 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:18.921 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.921 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:19.181 nvme0n1 00:19:19.181 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.181 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:19.181 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:19.181 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.181 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:19.181 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.181 
09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:19.181 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:19.181 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.181 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:19.181 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.181 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:19.181 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:19:19.181 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:19.181 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:19.181 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:19.181 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:19.181 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDQ4NTVkZDA5Zjc4YzJkZmM3ODYzMGJkZThjMDIxN2FhMzBkYzIwNDNkOThlNjI1epBSfw==: 00:19:19.181 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDllNmIxYzMzNjZlNDE2YzZmNGQ4NGEzMTA3OWI1NmI2NzUzNTRkZTkyYTVhZTNi7H0nyA==: 00:19:19.181 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:19.181 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:19.181 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDQ4NTVkZDA5Zjc4YzJkZmM3ODYzMGJkZThjMDIxN2FhMzBkYzIwNDNkOThlNjI1epBSfw==: 00:19:19.181 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDllNmIxYzMzNjZlNDE2YzZmNGQ4NGEzMTA3OWI1NmI2NzUzNTRkZTkyYTVhZTNi7H0nyA==: ]] 00:19:19.181 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDllNmIxYzMzNjZlNDE2YzZmNGQ4NGEzMTA3OWI1NmI2NzUzNTRkZTkyYTVhZTNi7H0nyA==: 00:19:19.181 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:19:19.181 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:19.181 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:19.182 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:19.182 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:19.182 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:19.182 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:19.182 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.182 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:19.182 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.182 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:19.182 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:19.182 09:47:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:19.182 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:19.182 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:19.182 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:19.182 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:19.182 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:19.182 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:19.182 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:19.182 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:19.182 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:19.182 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.182 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:19.441 nvme0n1 00:19:19.441 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.441 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:19.441 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.441 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:19.441 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:19.441 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.441 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:19.441 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:19.441 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.441 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:19.441 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.441 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:19.441 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:19:19.441 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:19.441 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:19.441 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:19.441 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:19.441 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjQ1YWViZTc3N2IyODIzNDkzMTMyNDIwZjQxODIyNzLy8g5n: 00:19:19.441 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjE0NmE1OTYwM2JhNDM5NWU1NTE0MmIwOWYyM2U1YWSkG+Xz: 00:19:19.441 09:47:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:19.441 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:19.441 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjQ1YWViZTc3N2IyODIzNDkzMTMyNDIwZjQxODIyNzLy8g5n: 00:19:19.441 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjE0NmE1OTYwM2JhNDM5NWU1NTE0MmIwOWYyM2U1YWSkG+Xz: ]] 00:19:19.441 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjE0NmE1OTYwM2JhNDM5NWU1NTE0MmIwOWYyM2U1YWSkG+Xz: 00:19:19.441 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:19:19.441 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:19.441 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:19.441 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:19.441 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:19.441 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:19.441 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:19.441 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.441 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:19.441 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.441 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:19.441 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:19.441 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:19.441 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:19.441 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:19.441 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:19.441 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:19.441 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:19.441 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:19.441 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:19.441 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:19.441 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:19.441 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.441 09:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:19.700 nvme0n1 00:19:19.700 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.700 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:19.700 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.700 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:19.700 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:19.700 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.700 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:19.700 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:19.700 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.700 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:19.700 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.700 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:19.700 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:19:19.700 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:19.700 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:19.700 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:19.700 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:19.700 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzY4ZTg4NTY1ZjNjYzI1ZWY1OTYxMzhhMmEwYzI2OGQyNWU1ZDNjMzMzMDhlMWU4CrVhZg==: 00:19:19.700 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2U5MzBiMjBjYjFlZGNkNTNlODQ1Y2VkODU3MTgxZDXGKAMl: 00:19:19.700 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:19.700 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:19.700 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzY4ZTg4NTY1ZjNjYzI1ZWY1OTYxMzhhMmEwYzI2OGQyNWU1ZDNjMzMzMDhlMWU4CrVhZg==: 00:19:19.700 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2U5MzBiMjBjYjFlZGNkNTNlODQ1Y2VkODU3MTgxZDXGKAMl: ]] 00:19:19.700 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2U5MzBiMjBjYjFlZGNkNTNlODQ1Y2VkODU3MTgxZDXGKAMl: 00:19:19.700 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:19:19.700 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:19.700 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:19.701 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:19.701 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:19.701 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:19.701 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:19.701 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.701 09:47:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:19.701 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.701 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:19.701 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:19.701 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:19.701 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:19.701 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:19.701 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:19.701 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:19.701 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:19.701 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:19.701 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:19.701 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:19.701 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:19.701 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.701 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:19.960 nvme0n1 00:19:19.960 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.960 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:19.960 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:19.960 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.960 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:19.960 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.960 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:19.960 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:19.960 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.960 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:19.960 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.960 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:19.960 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:19:19.960 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:19.960 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:19.960 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:19.960 
09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:19.960 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTg5YTc0ZjI2MzgwNmUyM2FkYTFhYjUwNTc3MDA5ZjljNWYwOWNmYzg4Y2FlNTlhNDAxOThlODgyYTExYmNiMTcrnxg=: 00:19:19.960 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:19.960 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:19.960 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:19.960 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTg5YTc0ZjI2MzgwNmUyM2FkYTFhYjUwNTc3MDA5ZjljNWYwOWNmYzg4Y2FlNTlhNDAxOThlODgyYTExYmNiMTcrnxg=: 00:19:19.960 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:19.960 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:19:19.960 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:19.960 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:19.960 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:19.960 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:19.960 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:19.960 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:19.960 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.960 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:19.960 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.960 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:19.960 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:19.960 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:19.960 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:19.960 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:19.960 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:19.960 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:19.960 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:19.960 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:19.960 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:19.960 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:19.960 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:19.960 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.960 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
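The xtrace above repeats the same host-side cycle for every digest/DH-group/key combination: restrict the initiator to one digest and DH group with bdev_nvme_set_options, attach the controller with the matching DH-HMAC-CHAP key (plus the controller key when one is configured), confirm the controller name, then detach. The sketch below condenses that cycle into a standalone script under stated assumptions: it assumes rpc_cmd in the log wraps SPDK's scripts/rpc.py, and that the keys key0..key4 and ckey0..ckey3 were registered on the host and target earlier in the test (not shown in this excerpt). The NQNs, address 10.0.0.1:4420, and RPC flags are copied verbatim from the log entries.

#!/usr/bin/env bash
# Hedged sketch of one connect_authenticate cycle as seen in the xtrace above.
# Assumptions: "rpc" points at SPDK's scripts/rpc.py, the target already
# listens on 10.0.0.1:4420 with DH-HMAC-CHAP enabled, and keyN/ckeyN name
# keys registered earlier in the test run.
set -e

rpc=./scripts/rpc.py                           # assumed path to the SPDK RPC client
hostnqn=nqn.2024-02.io.spdk:host0
subnqn=nqn.2024-02.io.spdk:cnode0

connect_authenticate() {
    local digest=$1 dhgroup=$2 keyid=$3
    local ctrlr_key=()

    # In this run only key 4 has no paired controller key; auth.sh@58 builds
    # the same optional argument from ${ckeys[keyid]}.
    if [[ $keyid -ne 4 ]]; then
        ctrlr_key=(--dhchap-ctrlr-key "ckey${keyid}")
    fi

    # Limit the host to a single digest/DH group (mirrors host/auth.sh@60).
    "$rpc" bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

    # Attach with the DH-HMAC-CHAP key(s) (mirrors host/auth.sh@61).
    "$rpc" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q "$hostnqn" -n "$subnqn" \
        --dhchap-key "key${keyid}" "${ctrlr_key[@]}"

    # Verify the controller came up, then tear it down (auth.sh@64/@65).
    [[ "$("$rpc" bdev_nvme_get_controllers | jq -r '.[].name')" == nvme0 ]]
    "$rpc" bdev_nvme_detach_controller nvme0
}

# Example matching the cycle logged around 09:47:07.
connect_authenticate sha512 ffdhe4096 4

In the run above this cycle is driven by the loops at host/auth.sh@101-104, which walk the remaining DH groups (ffdhe6144, ffdhe8192) and key IDs 0-4 with the sha512 digest, as the subsequent log entries show.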
00:19:20.219 nvme0n1 00:19:20.220 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.220 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:20.220 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.220 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:20.220 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:20.220 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.220 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:20.220 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:20.220 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.220 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:20.220 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.220 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:20.220 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:20.220 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:19:20.220 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:20.220 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:20.220 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:20.220 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:20.220 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTE0N2E4MTNhNDg3YmU4MjQ5YTFmZDU2N2U4M2MyYjA/mFNG: 00:19:20.220 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTM2YTM5Y2FlZDFmODA1YTk0YTI2YjBmYWI5ZmU1MDM1NzQ1NzZjZTc4Y2JhNWI4OTUyZjY4YjY3YTQyZWQ5MgpSCvc=: 00:19:20.220 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:20.220 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:20.220 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTE0N2E4MTNhNDg3YmU4MjQ5YTFmZDU2N2U4M2MyYjA/mFNG: 00:19:20.220 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTM2YTM5Y2FlZDFmODA1YTk0YTI2YjBmYWI5ZmU1MDM1NzQ1NzZjZTc4Y2JhNWI4OTUyZjY4YjY3YTQyZWQ5MgpSCvc=: ]] 00:19:20.220 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTM2YTM5Y2FlZDFmODA1YTk0YTI2YjBmYWI5ZmU1MDM1NzQ1NzZjZTc4Y2JhNWI4OTUyZjY4YjY3YTQyZWQ5MgpSCvc=: 00:19:20.220 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:19:20.220 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:20.220 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:20.220 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:20.220 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:20.220 09:47:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:20.220 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:20.220 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.220 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:20.220 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.220 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:20.220 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:20.220 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:20.220 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:20.220 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:20.220 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:20.220 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:20.220 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:20.220 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:20.220 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:20.220 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:20.220 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:20.220 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.220 09:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:20.787 nvme0n1 00:19:20.787 09:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.787 09:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:20.787 09:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.787 09:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:20.787 09:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:20.787 09:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.787 09:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:20.787 09:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:20.787 09:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.787 09:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:20.787 09:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.787 09:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:20.787 09:47:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:19:20.787 09:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:20.787 09:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:20.787 09:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:20.787 09:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:20.787 09:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDQ4NTVkZDA5Zjc4YzJkZmM3ODYzMGJkZThjMDIxN2FhMzBkYzIwNDNkOThlNjI1epBSfw==: 00:19:20.787 09:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDllNmIxYzMzNjZlNDE2YzZmNGQ4NGEzMTA3OWI1NmI2NzUzNTRkZTkyYTVhZTNi7H0nyA==: 00:19:20.787 09:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:20.787 09:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:20.787 09:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDQ4NTVkZDA5Zjc4YzJkZmM3ODYzMGJkZThjMDIxN2FhMzBkYzIwNDNkOThlNjI1epBSfw==: 00:19:20.787 09:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDllNmIxYzMzNjZlNDE2YzZmNGQ4NGEzMTA3OWI1NmI2NzUzNTRkZTkyYTVhZTNi7H0nyA==: ]] 00:19:20.787 09:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDllNmIxYzMzNjZlNDE2YzZmNGQ4NGEzMTA3OWI1NmI2NzUzNTRkZTkyYTVhZTNi7H0nyA==: 00:19:20.787 09:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:19:20.787 09:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:20.787 09:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:20.787 09:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:20.787 09:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:20.787 09:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:20.787 09:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:20.787 09:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.787 09:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:20.787 09:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.787 09:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:20.787 09:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:20.787 09:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:20.787 09:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:20.787 09:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:20.787 09:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:20.787 09:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:20.787 09:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:20.787 09:47:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:20.787 09:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:20.787 09:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:20.787 09:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:20.787 09:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.787 09:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:21.046 nvme0n1 00:19:21.046 09:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.046 09:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:21.046 09:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:21.046 09:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.046 09:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:21.046 09:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.046 09:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:21.046 09:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:21.046 09:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.046 09:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:21.046 09:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.046 09:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:21.046 09:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:19:21.046 09:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:21.046 09:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:21.046 09:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:21.046 09:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:21.046 09:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjQ1YWViZTc3N2IyODIzNDkzMTMyNDIwZjQxODIyNzLy8g5n: 00:19:21.046 09:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjE0NmE1OTYwM2JhNDM5NWU1NTE0MmIwOWYyM2U1YWSkG+Xz: 00:19:21.046 09:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:21.046 09:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:21.046 09:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjQ1YWViZTc3N2IyODIzNDkzMTMyNDIwZjQxODIyNzLy8g5n: 00:19:21.046 09:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjE0NmE1OTYwM2JhNDM5NWU1NTE0MmIwOWYyM2U1YWSkG+Xz: ]] 00:19:21.046 09:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjE0NmE1OTYwM2JhNDM5NWU1NTE0MmIwOWYyM2U1YWSkG+Xz: 00:19:21.046 09:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:19:21.046 09:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:21.046 09:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:21.046 09:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:21.046 09:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:21.046 09:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:21.046 09:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:21.046 09:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.046 09:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:21.046 09:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.046 09:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:21.046 09:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:21.046 09:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:21.046 09:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:21.046 09:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:21.046 09:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:21.305 09:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:21.306 09:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:21.306 09:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:21.306 09:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:21.306 09:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:21.306 09:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:21.306 09:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.306 09:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:21.564 nvme0n1 00:19:21.564 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.564 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:21.564 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:21.564 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.564 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:21.564 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.564 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:21.564 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:19:21.564 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.564 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:21.564 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.564 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:21.564 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:19:21.564 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:21.564 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:21.564 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:21.564 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:21.564 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzY4ZTg4NTY1ZjNjYzI1ZWY1OTYxMzhhMmEwYzI2OGQyNWU1ZDNjMzMzMDhlMWU4CrVhZg==: 00:19:21.564 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2U5MzBiMjBjYjFlZGNkNTNlODQ1Y2VkODU3MTgxZDXGKAMl: 00:19:21.564 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:21.564 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:21.564 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzY4ZTg4NTY1ZjNjYzI1ZWY1OTYxMzhhMmEwYzI2OGQyNWU1ZDNjMzMzMDhlMWU4CrVhZg==: 00:19:21.564 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2U5MzBiMjBjYjFlZGNkNTNlODQ1Y2VkODU3MTgxZDXGKAMl: ]] 00:19:21.564 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2U5MzBiMjBjYjFlZGNkNTNlODQ1Y2VkODU3MTgxZDXGKAMl: 00:19:21.564 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:19:21.564 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:21.564 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:21.564 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:21.564 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:21.564 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:21.564 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:21.564 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.564 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:21.564 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.564 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:21.564 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:21.564 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:21.564 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:21.564 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:21.564 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:21.564 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:21.564 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:21.564 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:21.564 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:21.564 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:21.564 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:21.564 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.564 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:22.158 nvme0n1 00:19:22.158 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.158 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:22.158 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.158 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:22.158 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:22.158 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.158 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:22.158 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:22.158 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.158 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:22.158 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.158 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:22.158 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:19:22.158 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:22.158 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:22.158 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:22.158 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:22.158 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTg5YTc0ZjI2MzgwNmUyM2FkYTFhYjUwNTc3MDA5ZjljNWYwOWNmYzg4Y2FlNTlhNDAxOThlODgyYTExYmNiMTcrnxg=: 00:19:22.158 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:22.158 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:22.158 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:22.158 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MTg5YTc0ZjI2MzgwNmUyM2FkYTFhYjUwNTc3MDA5ZjljNWYwOWNmYzg4Y2FlNTlhNDAxOThlODgyYTExYmNiMTcrnxg=: 00:19:22.158 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:22.158 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:19:22.158 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:22.158 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:22.158 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:22.158 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:22.158 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:22.158 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:22.158 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.158 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:22.158 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.158 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:22.158 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:22.158 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:22.158 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:22.158 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:22.158 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:22.158 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:22.158 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:22.158 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:22.158 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:22.158 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:22.158 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:22.158 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.158 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:22.430 nvme0n1 00:19:22.430 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.430 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:22.430 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.430 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:22.430 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:22.430 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.430 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:22.430 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:22.430 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.430 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:22.430 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.430 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:22.430 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:22.430 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:19:22.430 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:22.430 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:22.430 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:22.430 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:22.430 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTE0N2E4MTNhNDg3YmU4MjQ5YTFmZDU2N2U4M2MyYjA/mFNG: 00:19:22.430 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTM2YTM5Y2FlZDFmODA1YTk0YTI2YjBmYWI5ZmU1MDM1NzQ1NzZjZTc4Y2JhNWI4OTUyZjY4YjY3YTQyZWQ5MgpSCvc=: 00:19:22.430 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:22.430 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:22.430 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTE0N2E4MTNhNDg3YmU4MjQ5YTFmZDU2N2U4M2MyYjA/mFNG: 00:19:22.430 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTM2YTM5Y2FlZDFmODA1YTk0YTI2YjBmYWI5ZmU1MDM1NzQ1NzZjZTc4Y2JhNWI4OTUyZjY4YjY3YTQyZWQ5MgpSCvc=: ]] 00:19:22.430 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTM2YTM5Y2FlZDFmODA1YTk0YTI2YjBmYWI5ZmU1MDM1NzQ1NzZjZTc4Y2JhNWI4OTUyZjY4YjY3YTQyZWQ5MgpSCvc=: 00:19:22.430 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:19:22.430 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:22.430 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:22.430 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:22.430 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:22.430 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:22.430 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:22.430 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.430 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:22.430 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.430 09:47:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:22.430 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:22.430 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:22.430 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:22.430 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:22.430 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:22.430 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:22.430 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:22.430 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:22.430 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:22.430 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:22.430 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:22.430 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.430 09:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:22.997 nvme0n1 00:19:22.997 09:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.997 09:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:22.997 09:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:22.997 09:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.997 09:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:22.997 09:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.997 09:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:22.997 09:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:22.997 09:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.997 09:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:23.256 09:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.256 09:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:23.256 09:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:19:23.256 09:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:23.256 09:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:23.256 09:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:23.256 09:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:23.256 09:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NDQ4NTVkZDA5Zjc4YzJkZmM3ODYzMGJkZThjMDIxN2FhMzBkYzIwNDNkOThlNjI1epBSfw==: 00:19:23.256 09:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDllNmIxYzMzNjZlNDE2YzZmNGQ4NGEzMTA3OWI1NmI2NzUzNTRkZTkyYTVhZTNi7H0nyA==: 00:19:23.256 09:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:23.256 09:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:23.256 09:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDQ4NTVkZDA5Zjc4YzJkZmM3ODYzMGJkZThjMDIxN2FhMzBkYzIwNDNkOThlNjI1epBSfw==: 00:19:23.256 09:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDllNmIxYzMzNjZlNDE2YzZmNGQ4NGEzMTA3OWI1NmI2NzUzNTRkZTkyYTVhZTNi7H0nyA==: ]] 00:19:23.256 09:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDllNmIxYzMzNjZlNDE2YzZmNGQ4NGEzMTA3OWI1NmI2NzUzNTRkZTkyYTVhZTNi7H0nyA==: 00:19:23.256 09:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:19:23.256 09:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:23.256 09:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:23.256 09:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:23.256 09:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:23.256 09:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:23.256 09:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:23.256 09:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.256 09:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:23.256 09:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.256 09:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:23.256 09:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:23.256 09:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:23.257 09:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:23.257 09:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:23.257 09:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:23.257 09:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:23.257 09:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:23.257 09:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:23.257 09:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:23.257 09:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:23.257 09:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:23.257 09:47:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.257 09:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:23.824 nvme0n1 00:19:23.824 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.824 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:23.824 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.824 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:23.824 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:23.824 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.824 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:23.824 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:23.824 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.824 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:23.824 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.824 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:23.824 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:19:23.824 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:23.824 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:23.824 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:23.824 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:23.824 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjQ1YWViZTc3N2IyODIzNDkzMTMyNDIwZjQxODIyNzLy8g5n: 00:19:23.824 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjE0NmE1OTYwM2JhNDM5NWU1NTE0MmIwOWYyM2U1YWSkG+Xz: 00:19:23.824 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:23.824 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:23.824 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjQ1YWViZTc3N2IyODIzNDkzMTMyNDIwZjQxODIyNzLy8g5n: 00:19:23.824 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjE0NmE1OTYwM2JhNDM5NWU1NTE0MmIwOWYyM2U1YWSkG+Xz: ]] 00:19:23.824 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjE0NmE1OTYwM2JhNDM5NWU1NTE0MmIwOWYyM2U1YWSkG+Xz: 00:19:23.824 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:19:23.824 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:23.824 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:23.824 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:23.824 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:23.824 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:23.824 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:23.824 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.824 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:23.824 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.824 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:23.824 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:23.824 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:23.824 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:23.824 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:23.825 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:23.825 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:23.825 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:23.825 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:23.825 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:23.825 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:23.825 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:23.825 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.825 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:24.392 nvme0n1 00:19:24.392 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.392 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:24.392 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.392 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:24.392 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:24.392 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.392 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:24.392 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:24.392 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.392 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:24.392 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.392 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:24.392 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe8192 3 00:19:24.392 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:24.392 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:24.392 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:24.392 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:24.392 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzY4ZTg4NTY1ZjNjYzI1ZWY1OTYxMzhhMmEwYzI2OGQyNWU1ZDNjMzMzMDhlMWU4CrVhZg==: 00:19:24.392 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2U5MzBiMjBjYjFlZGNkNTNlODQ1Y2VkODU3MTgxZDXGKAMl: 00:19:24.392 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:24.392 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:24.392 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzY4ZTg4NTY1ZjNjYzI1ZWY1OTYxMzhhMmEwYzI2OGQyNWU1ZDNjMzMzMDhlMWU4CrVhZg==: 00:19:24.392 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2U5MzBiMjBjYjFlZGNkNTNlODQ1Y2VkODU3MTgxZDXGKAMl: ]] 00:19:24.392 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2U5MzBiMjBjYjFlZGNkNTNlODQ1Y2VkODU3MTgxZDXGKAMl: 00:19:24.392 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:19:24.392 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:24.392 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:24.392 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:24.392 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:24.392 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:24.392 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:24.392 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.392 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:24.392 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.392 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:24.392 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:24.392 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:24.392 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:24.392 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:24.392 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:24.392 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:24.392 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:24.392 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:24.392 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:24.392 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:24.392 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:24.392 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.392 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:25.328 nvme0n1 00:19:25.328 09:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.328 09:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:25.328 09:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.328 09:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:25.328 09:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:25.328 09:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.329 09:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:25.329 09:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:25.329 09:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.329 09:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:25.329 09:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.329 09:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:25.329 09:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:19:25.329 09:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:25.329 09:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:25.329 09:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:25.329 09:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:25.329 09:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTg5YTc0ZjI2MzgwNmUyM2FkYTFhYjUwNTc3MDA5ZjljNWYwOWNmYzg4Y2FlNTlhNDAxOThlODgyYTExYmNiMTcrnxg=: 00:19:25.329 09:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:25.329 09:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:25.329 09:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:25.329 09:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTg5YTc0ZjI2MzgwNmUyM2FkYTFhYjUwNTc3MDA5ZjljNWYwOWNmYzg4Y2FlNTlhNDAxOThlODgyYTExYmNiMTcrnxg=: 00:19:25.329 09:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:25.329 09:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:19:25.329 09:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:25.329 09:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:25.329 09:47:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:25.329 09:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:25.329 09:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:25.329 09:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:25.329 09:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.329 09:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:25.329 09:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.329 09:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:25.329 09:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:25.329 09:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:25.329 09:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:25.329 09:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:25.329 09:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:25.329 09:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:25.329 09:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:25.329 09:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:25.329 09:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:25.329 09:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:25.329 09:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:25.329 09:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.329 09:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:25.897 nvme0n1 00:19:25.897 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.897 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:25.897 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:25.897 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.897 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:25.897 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.897 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:25.897 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:25.897 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.897 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:25.897 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.897 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:19:25.897 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:25.897 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:25.897 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:25.897 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:25.897 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDQ4NTVkZDA5Zjc4YzJkZmM3ODYzMGJkZThjMDIxN2FhMzBkYzIwNDNkOThlNjI1epBSfw==: 00:19:25.897 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDllNmIxYzMzNjZlNDE2YzZmNGQ4NGEzMTA3OWI1NmI2NzUzNTRkZTkyYTVhZTNi7H0nyA==: 00:19:25.897 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:25.897 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:25.897 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDQ4NTVkZDA5Zjc4YzJkZmM3ODYzMGJkZThjMDIxN2FhMzBkYzIwNDNkOThlNjI1epBSfw==: 00:19:25.897 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDllNmIxYzMzNjZlNDE2YzZmNGQ4NGEzMTA3OWI1NmI2NzUzNTRkZTkyYTVhZTNi7H0nyA==: ]] 00:19:25.897 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDllNmIxYzMzNjZlNDE2YzZmNGQ4NGEzMTA3OWI1NmI2NzUzNTRkZTkyYTVhZTNi7H0nyA==: 00:19:25.897 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:25.897 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.897 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:25.897 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.897 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:19:25.897 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:25.897 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:25.897 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:25.897 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:25.897 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:25.897 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:25.897 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:25.897 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:25.897 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:25.897 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:25.897 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:19:25.897 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # 
local es=0 00:19:25.897 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:19:25.897 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:19:25.897 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:25.897 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:19:25.897 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:25.897 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:19:25.897 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.897 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:25.897 request: 00:19:25.897 { 00:19:25.897 "name": "nvme0", 00:19:25.897 "trtype": "tcp", 00:19:25.897 "traddr": "10.0.0.1", 00:19:25.897 "adrfam": "ipv4", 00:19:25.897 "trsvcid": "4420", 00:19:25.897 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:19:25.897 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:19:25.897 "prchk_reftag": false, 00:19:25.897 "prchk_guard": false, 00:19:25.897 "hdgst": false, 00:19:25.897 "ddgst": false, 00:19:25.897 "allow_unrecognized_csi": false, 00:19:25.897 "method": "bdev_nvme_attach_controller", 00:19:25.897 "req_id": 1 00:19:25.897 } 00:19:25.897 Got JSON-RPC error response 00:19:25.897 response: 00:19:25.897 { 00:19:25.897 "code": -5, 00:19:25.897 "message": "Input/output error" 00:19:25.897 } 00:19:25.897 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:19:25.897 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:19:25.897 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:25.897 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:25.897 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:25.897 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:19:25.897 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.897 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:19:25.897 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:25.897 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.897 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:19:25.897 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:19:25.897 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:25.897 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:25.897 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:25.897 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:25.898 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:25.898 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:25.898 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:25.898 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:25.898 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:25.898 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:25.898 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:19:25.898 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:19:25.898 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:19:25.898 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:19:25.898 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:25.898 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:19:25.898 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:25.898 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:19:25.898 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.898 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:25.898 request: 00:19:25.898 { 00:19:25.898 "name": "nvme0", 00:19:25.898 "trtype": "tcp", 00:19:25.898 "traddr": "10.0.0.1", 00:19:25.898 "adrfam": "ipv4", 00:19:25.898 "trsvcid": "4420", 00:19:25.898 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:19:25.898 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:19:25.898 "prchk_reftag": false, 00:19:25.898 "prchk_guard": false, 00:19:25.898 "hdgst": false, 00:19:25.898 "ddgst": false, 00:19:25.898 "dhchap_key": "key2", 00:19:25.898 "allow_unrecognized_csi": false, 00:19:25.898 "method": "bdev_nvme_attach_controller", 00:19:25.898 "req_id": 1 00:19:25.898 } 00:19:25.898 Got JSON-RPC error response 00:19:25.898 response: 00:19:25.898 { 00:19:25.898 "code": -5, 00:19:25.898 "message": "Input/output error" 00:19:25.898 } 00:19:25.898 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:19:25.898 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:19:25.898 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:25.898 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:25.898 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:25.898 09:47:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:19:25.898 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:19:25.898 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.898 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:25.898 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.157 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:19:26.157 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:19:26.157 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:26.157 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:26.157 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:26.157 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:26.157 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:26.157 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:26.157 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:26.157 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:26.157 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:26.157 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:26.157 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:26.157 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:19:26.157 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:26.157 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:19:26.157 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:26.157 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:19:26.157 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:26.157 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:26.157 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.157 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:26.157 request: 00:19:26.157 { 00:19:26.157 "name": "nvme0", 00:19:26.157 "trtype": "tcp", 00:19:26.157 "traddr": "10.0.0.1", 00:19:26.157 "adrfam": "ipv4", 00:19:26.157 "trsvcid": "4420", 
00:19:26.157 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:19:26.157 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:19:26.157 "prchk_reftag": false, 00:19:26.157 "prchk_guard": false, 00:19:26.157 "hdgst": false, 00:19:26.157 "ddgst": false, 00:19:26.157 "dhchap_key": "key1", 00:19:26.157 "dhchap_ctrlr_key": "ckey2", 00:19:26.157 "allow_unrecognized_csi": false, 00:19:26.157 "method": "bdev_nvme_attach_controller", 00:19:26.157 "req_id": 1 00:19:26.157 } 00:19:26.157 Got JSON-RPC error response 00:19:26.157 response: 00:19:26.157 { 00:19:26.157 "code": -5, 00:19:26.157 "message": "Input/output error" 00:19:26.157 } 00:19:26.157 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:19:26.157 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:19:26.157 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:26.157 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:26.157 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:26.157 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:19:26.157 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:26.157 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:26.157 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:26.157 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:26.157 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:26.157 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:26.157 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:26.157 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:26.157 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:26.157 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:26.157 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:19:26.157 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.157 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:26.157 nvme0n1 00:19:26.157 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.157 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:19:26.157 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:26.157 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:26.157 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:26.157 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:26.157 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:ZjQ1YWViZTc3N2IyODIzNDkzMTMyNDIwZjQxODIyNzLy8g5n: 00:19:26.157 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjE0NmE1OTYwM2JhNDM5NWU1NTE0MmIwOWYyM2U1YWSkG+Xz: 00:19:26.157 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:26.157 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:26.157 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjQ1YWViZTc3N2IyODIzNDkzMTMyNDIwZjQxODIyNzLy8g5n: 00:19:26.157 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjE0NmE1OTYwM2JhNDM5NWU1NTE0MmIwOWYyM2U1YWSkG+Xz: ]] 00:19:26.157 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjE0NmE1OTYwM2JhNDM5NWU1NTE0MmIwOWYyM2U1YWSkG+Xz: 00:19:26.157 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:26.157 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.157 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:26.157 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.157 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:19:26.157 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:19:26.157 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.157 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:26.157 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.157 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:26.157 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:26.157 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:19:26.157 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:26.157 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:19:26.157 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:26.157 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:19:26.157 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:26.157 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:26.157 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.157 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:26.416 request: 00:19:26.416 { 00:19:26.416 "name": "nvme0", 00:19:26.416 "dhchap_key": "key1", 00:19:26.416 "dhchap_ctrlr_key": "ckey2", 00:19:26.416 "method": "bdev_nvme_set_keys", 00:19:26.416 "req_id": 1 00:19:26.416 } 00:19:26.416 Got JSON-RPC error response 00:19:26.416 response: 00:19:26.416 
{ 00:19:26.416 "code": -5, 00:19:26.416 "message": "Input/output error" 00:19:26.416 } 00:19:26.416 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:19:26.416 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:19:26.416 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:26.416 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:26.416 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:26.416 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:19:26.416 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:19:26.416 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.416 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:26.416 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.416 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:19:26.416 09:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:19:27.354 09:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:19:27.354 09:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:19:27.354 09:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.354 09:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:27.354 09:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.354 09:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:19:27.354 09:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:19:27.354 09:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:27.354 09:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:27.354 09:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:27.354 09:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:27.354 09:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDQ4NTVkZDA5Zjc4YzJkZmM3ODYzMGJkZThjMDIxN2FhMzBkYzIwNDNkOThlNjI1epBSfw==: 00:19:27.354 09:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDllNmIxYzMzNjZlNDE2YzZmNGQ4NGEzMTA3OWI1NmI2NzUzNTRkZTkyYTVhZTNi7H0nyA==: 00:19:27.354 09:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:27.354 09:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:27.354 09:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDQ4NTVkZDA5Zjc4YzJkZmM3ODYzMGJkZThjMDIxN2FhMzBkYzIwNDNkOThlNjI1epBSfw==: 00:19:27.354 09:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDllNmIxYzMzNjZlNDE2YzZmNGQ4NGEzMTA3OWI1NmI2NzUzNTRkZTkyYTVhZTNi7H0nyA==: ]] 00:19:27.354 09:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDllNmIxYzMzNjZlNDE2YzZmNGQ4NGEzMTA3OWI1NmI2NzUzNTRkZTkyYTVhZTNi7H0nyA==: 00:19:27.354 09:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@142 -- # get_main_ns_ip 00:19:27.354 09:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:27.354 09:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:27.354 09:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:27.354 09:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:27.354 09:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:27.354 09:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:27.354 09:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:27.354 09:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:27.354 09:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:27.354 09:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:27.354 09:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:19:27.354 09:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.354 09:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:27.621 nvme0n1 00:19:27.621 09:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.621 09:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:19:27.621 09:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:27.621 09:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:27.621 09:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:27.621 09:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:27.621 09:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjQ1YWViZTc3N2IyODIzNDkzMTMyNDIwZjQxODIyNzLy8g5n: 00:19:27.621 09:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjE0NmE1OTYwM2JhNDM5NWU1NTE0MmIwOWYyM2U1YWSkG+Xz: 00:19:27.621 09:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:27.621 09:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:27.621 09:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjQ1YWViZTc3N2IyODIzNDkzMTMyNDIwZjQxODIyNzLy8g5n: 00:19:27.621 09:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjE0NmE1OTYwM2JhNDM5NWU1NTE0MmIwOWYyM2U1YWSkG+Xz: ]] 00:19:27.621 09:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjE0NmE1OTYwM2JhNDM5NWU1NTE0MmIwOWYyM2U1YWSkG+Xz: 00:19:27.621 09:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:19:27.621 09:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:19:27.621 09:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # 
valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:19:27.621 09:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:19:27.621 09:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:27.621 09:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:19:27.621 09:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:27.621 09:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:19:27.621 09:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.621 09:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:27.621 request: 00:19:27.621 { 00:19:27.621 "name": "nvme0", 00:19:27.621 "dhchap_key": "key2", 00:19:27.621 "dhchap_ctrlr_key": "ckey1", 00:19:27.621 "method": "bdev_nvme_set_keys", 00:19:27.621 "req_id": 1 00:19:27.621 } 00:19:27.621 Got JSON-RPC error response 00:19:27.621 response: 00:19:27.621 { 00:19:27.621 "code": -13, 00:19:27.621 "message": "Permission denied" 00:19:27.621 } 00:19:27.621 09:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:19:27.621 09:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:19:27.621 09:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:27.621 09:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:27.621 09:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:27.621 09:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:19:27.621 09:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:19:27.621 09:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.621 09:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:27.621 09:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.621 09:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:19:27.621 09:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:19:28.558 09:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:19:28.558 09:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:19:28.558 09:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.558 09:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:28.558 09:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.558 09:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:19:28.558 09:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:19:28.558 09:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:19:28.558 09:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:19:28.558 09:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # 
nvmfcleanup 00:19:28.558 09:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:19:28.558 09:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:28.558 09:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:19:28.558 09:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:28.558 09:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:28.558 rmmod nvme_tcp 00:19:28.558 rmmod nvme_fabrics 00:19:28.558 09:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:28.816 09:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:19:28.816 09:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:19:28.816 09:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 78461 ']' 00:19:28.816 09:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 78461 00:19:28.816 09:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 78461 ']' 00:19:28.816 09:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 78461 00:19:28.816 09:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:19:28.816 09:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:28.816 09:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78461 00:19:28.816 killing process with pid 78461 00:19:28.816 09:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:28.816 09:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:28.816 09:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78461' 00:19:28.816 09:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 78461 00:19:28.816 09:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 78461 00:19:28.816 09:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:28.816 09:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:28.816 09:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:28.816 09:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:19:28.816 09:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:19:28.816 09:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:28.816 09:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:19:28.816 09:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:28.816 09:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:19:28.816 09:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:19:28.816 09:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:19:29.074 09:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:19:29.074 09:47:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:19:29.074 09:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:19:29.074 09:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:19:29.074 09:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:19:29.074 09:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:19:29.074 09:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:19:29.074 09:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:19:29.074 09:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:19:29.074 09:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:29.074 09:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:29.074 09:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:19:29.074 09:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:29.074 09:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:29.074 09:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:29.074 09:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@300 -- # return 0 00:19:29.074 09:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:19:29.074 09:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:19:29.074 09:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:19:29.074 09:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:19:29.074 09:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:19:29.074 09:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:19:29.074 09:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:19:29.074 09:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:19:29.074 09:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:19:29.074 09:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:19:29.074 09:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:19:29.332 09:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:19:29.899 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:29.899 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 
00:19:30.157 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:19:30.157 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.KfW /tmp/spdk.key-null.bmg /tmp/spdk.key-sha256.QQb /tmp/spdk.key-sha384.oXe /tmp/spdk.key-sha512.t6f /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:19:30.157 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:19:30.416 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:30.416 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:19:30.416 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:19:30.416 00:19:30.416 real 0m37.846s 00:19:30.416 user 0m34.023s 00:19:30.416 sys 0m3.879s 00:19:30.416 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:30.416 09:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:30.416 ************************************ 00:19:30.416 END TEST nvmf_auth_host 00:19:30.416 ************************************ 00:19:30.674 09:47:18 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:19:30.674 09:47:18 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:19:30.674 09:47:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:30.675 09:47:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:30.675 09:47:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:30.675 ************************************ 00:19:30.675 START TEST nvmf_digest 00:19:30.675 ************************************ 00:19:30.675 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:19:30.675 * Looking for test storage... 
00:19:30.675 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:30.675 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:30.675 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lcov --version 00:19:30.675 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:30.675 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:30.675 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:30.675 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:30.675 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:30.675 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:19:30.675 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:19:30.675 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:19:30.675 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:19:30.675 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:19:30.675 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:19:30.675 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:19:30.675 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:30.675 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:19:30.675 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:19:30.675 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:30.675 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:30.675 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:19:30.675 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:19:30.675 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:30.675 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:19:30.675 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:19:30.675 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:19:30.675 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:19:30.675 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:30.675 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:19:30.675 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:19:30.675 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:30.675 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:30.675 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:19:30.675 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:30.675 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:30.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:30.675 --rc genhtml_branch_coverage=1 00:19:30.675 --rc genhtml_function_coverage=1 00:19:30.675 --rc genhtml_legend=1 00:19:30.675 --rc geninfo_all_blocks=1 00:19:30.675 --rc geninfo_unexecuted_blocks=1 00:19:30.675 00:19:30.675 ' 00:19:30.675 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:30.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:30.675 --rc genhtml_branch_coverage=1 00:19:30.675 --rc genhtml_function_coverage=1 00:19:30.675 --rc genhtml_legend=1 00:19:30.675 --rc geninfo_all_blocks=1 00:19:30.675 --rc geninfo_unexecuted_blocks=1 00:19:30.675 00:19:30.675 ' 00:19:30.675 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:30.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:30.675 --rc genhtml_branch_coverage=1 00:19:30.675 --rc genhtml_function_coverage=1 00:19:30.675 --rc genhtml_legend=1 00:19:30.675 --rc geninfo_all_blocks=1 00:19:30.675 --rc geninfo_unexecuted_blocks=1 00:19:30.675 00:19:30.675 ' 00:19:30.675 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:30.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:30.675 --rc genhtml_branch_coverage=1 00:19:30.675 --rc genhtml_function_coverage=1 00:19:30.675 --rc genhtml_legend=1 00:19:30.675 --rc geninfo_all_blocks=1 00:19:30.675 --rc geninfo_unexecuted_blocks=1 00:19:30.675 00:19:30.675 ' 00:19:30.675 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:30.675 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:19:30.675 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:30.675 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:30.675 09:47:18 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:30.675 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:30.675 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:30.675 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:30.675 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:30.675 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:30.675 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:30.675 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:30.675 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c 00:19:30.675 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=9203ba0c-8506-4f0b-a886-a7f874c4694c 00:19:30.675 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:30.675 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:30.675 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:30.675 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:30.675 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:30.675 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:19:30.675 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:30.675 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:30.675 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:30.675 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:30.675 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:30.675 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:30.675 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:19:30.675 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:30.675 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:19:30.675 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:30.676 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:30.676 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:30.676 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:30.676 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:30.676 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:30.676 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:30.676 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:30.676 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:30.676 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:30.676 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:19:30.676 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:19:30.676 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:19:30.676 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:19:30.676 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:19:30.676 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:30.676 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:30.676 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:30.676 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:30.676 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:30.676 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:30.676 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:30.676 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:30.676 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:19:30.676 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:19:30.676 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:19:30.676 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:19:30.676 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:19:30.676 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@460 -- # nvmf_veth_init 00:19:30.676 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:30.676 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:19:30.676 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:19:30.676 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:30.676 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:30.676 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:19:30.676 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:30.676 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:19:30.676 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:30.676 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:19:30.676 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:30.676 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:30.676 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:30.676 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:30.676 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:30.676 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:30.676 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:19:30.676 Cannot find device "nvmf_init_br" 00:19:30.676 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # true 00:19:30.676 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:19:30.935 Cannot find device "nvmf_init_br2" 00:19:30.935 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # true 00:19:30.935 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:19:30.935 Cannot find device "nvmf_tgt_br" 00:19:30.935 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # true 00:19:30.935 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # ip link 
set nvmf_tgt_br2 nomaster 00:19:30.935 Cannot find device "nvmf_tgt_br2" 00:19:30.935 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # true 00:19:30.935 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:19:30.935 Cannot find device "nvmf_init_br" 00:19:30.935 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # true 00:19:30.935 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:19:30.935 Cannot find device "nvmf_init_br2" 00:19:30.935 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # true 00:19:30.935 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:19:30.935 Cannot find device "nvmf_tgt_br" 00:19:30.935 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # true 00:19:30.935 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:19:30.935 Cannot find device "nvmf_tgt_br2" 00:19:30.935 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # true 00:19:30.935 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:19:30.935 Cannot find device "nvmf_br" 00:19:30.935 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # true 00:19:30.935 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:19:30.935 Cannot find device "nvmf_init_if" 00:19:30.935 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # true 00:19:30.935 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:19:30.935 Cannot find device "nvmf_init_if2" 00:19:30.935 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # true 00:19:30.935 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:30.935 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:30.935 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # true 00:19:30.935 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:30.935 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:30.935 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # true 00:19:30.935 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:19:30.935 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:30.935 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:19:30.935 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:30.935 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:30.935 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:30.935 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:30.935 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:30.935 09:47:18 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:19:30.935 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:30.935 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:19:30.935 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:19:30.935 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:19:30.935 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:19:30.935 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:19:30.935 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:19:30.935 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:19:30.935 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:30.935 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:30.935 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:30.935 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:19:30.935 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:19:31.195 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:19:31.195 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:19:31.195 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:31.195 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:31.195 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:31.195 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:19:31.195 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:19:31.195 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:19:31.195 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:31.195 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:19:31.195 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:19:31.195 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:19:31.195 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.108 ms 00:19:31.195 00:19:31.195 --- 10.0.0.3 ping statistics --- 00:19:31.195 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:31.195 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:19:31.195 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:19:31.195 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:19:31.195 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.064 ms 00:19:31.195 00:19:31.195 --- 10.0.0.4 ping statistics --- 00:19:31.195 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:31.195 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:19:31.195 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:31.195 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:31.195 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:19:31.195 00:19:31.195 --- 10.0.0.1 ping statistics --- 00:19:31.195 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:31.195 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:19:31.195 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:19:31.195 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:31.195 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:19:31.195 00:19:31.195 --- 10.0.0.2 ping statistics --- 00:19:31.195 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:31.195 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:19:31.195 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:31.195 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@461 -- # return 0 00:19:31.195 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:31.195 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:31.195 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:31.195 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:31.195 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:31.195 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:31.195 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:31.195 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:19:31.195 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:19:31.195 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:19:31.195 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:31.195 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:31.195 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:19:31.195 ************************************ 00:19:31.195 START TEST nvmf_digest_clean 00:19:31.195 ************************************ 00:19:31.195 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:19:31.195 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 
00:19:31.195 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:19:31.195 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:19:31.195 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:19:31.195 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:19:31.195 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:31.195 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:31.195 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:19:31.195 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=80113 00:19:31.195 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:19:31.195 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 80113 00:19:31.195 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 80113 ']' 00:19:31.195 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:31.195 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:31.195 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:31.195 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:31.195 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:31.195 09:47:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:19:31.195 [2024-11-19 09:47:18.762128] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:19:31.196 [2024-11-19 09:47:18.762243] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:31.455 [2024-11-19 09:47:18.909139] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:31.455 [2024-11-19 09:47:18.981460] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:31.455 [2024-11-19 09:47:18.981540] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:31.455 [2024-11-19 09:47:18.981562] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:31.455 [2024-11-19 09:47:18.981576] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:31.455 [2024-11-19 09:47:18.981588] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:31.455 [2024-11-19 09:47:18.982049] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:31.455 09:47:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:31.455 09:47:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:19:31.455 09:47:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:31.455 09:47:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:31.455 09:47:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:19:31.455 09:47:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:31.455 09:47:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:19:31.455 09:47:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:19:31.455 09:47:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:19:31.455 09:47:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.455 09:47:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:19:31.714 [2024-11-19 09:47:19.118539] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:31.714 null0 00:19:31.714 [2024-11-19 09:47:19.171979] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:31.714 [2024-11-19 09:47:19.196113] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:31.714 09:47:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.714 09:47:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:19:31.714 09:47:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:19:31.714 09:47:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:19:31.714 09:47:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:19:31.714 09:47:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:19:31.714 09:47:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:19:31.714 09:47:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:19:31.714 09:47:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80138 00:19:31.714 09:47:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:19:31.714 09:47:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80138 /var/tmp/bperf.sock 00:19:31.714 09:47:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 80138 ']' 00:19:31.714 09:47:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:19:31.714 09:47:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:31.714 09:47:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:19:31.714 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:19:31.714 09:47:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:31.714 09:47:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:19:31.715 [2024-11-19 09:47:19.259646] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:19:31.715 [2024-11-19 09:47:19.259763] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80138 ] 00:19:31.973 [2024-11-19 09:47:19.413117] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:31.973 [2024-11-19 09:47:19.481871] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:31.973 09:47:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:31.973 09:47:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:19:31.973 09:47:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:19:31.973 09:47:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:19:31.973 09:47:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:19:32.232 [2024-11-19 09:47:19.813626] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:32.490 09:47:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:32.491 09:47:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:32.749 nvme0n1 00:19:32.749 09:47:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:19:32.749 09:47:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:19:32.749 Running I/O for 2 seconds... 
00:19:35.059 14732.00 IOPS, 57.55 MiB/s [2024-11-19T09:47:22.682Z] 14732.00 IOPS, 57.55 MiB/s 00:19:35.059 Latency(us) 00:19:35.059 [2024-11-19T09:47:22.682Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:35.059 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:19:35.059 nvme0n1 : 2.01 14756.18 57.64 0.00 0.00 8667.80 7357.91 25380.31 00:19:35.059 [2024-11-19T09:47:22.682Z] =================================================================================================================== 00:19:35.059 [2024-11-19T09:47:22.682Z] Total : 14756.18 57.64 0.00 0.00 8667.80 7357.91 25380.31 00:19:35.059 { 00:19:35.059 "results": [ 00:19:35.059 { 00:19:35.059 "job": "nvme0n1", 00:19:35.059 "core_mask": "0x2", 00:19:35.059 "workload": "randread", 00:19:35.059 "status": "finished", 00:19:35.059 "queue_depth": 128, 00:19:35.059 "io_size": 4096, 00:19:35.059 "runtime": 2.005397, 00:19:35.059 "iops": 14756.1804470636, 00:19:35.059 "mibps": 57.64132987134219, 00:19:35.059 "io_failed": 0, 00:19:35.059 "io_timeout": 0, 00:19:35.059 "avg_latency_us": 8667.802509769226, 00:19:35.059 "min_latency_us": 7357.905454545455, 00:19:35.059 "max_latency_us": 25380.305454545454 00:19:35.059 } 00:19:35.059 ], 00:19:35.059 "core_count": 1 00:19:35.059 } 00:19:35.059 09:47:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:19:35.059 09:47:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:19:35.059 09:47:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:19:35.059 09:47:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:19:35.059 09:47:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:19:35.059 | select(.opcode=="crc32c") 00:19:35.059 | "\(.module_name) \(.executed)"' 00:19:35.059 09:47:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:19:35.059 09:47:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:19:35.059 09:47:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:19:35.059 09:47:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:19:35.059 09:47:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80138 00:19:35.059 09:47:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 80138 ']' 00:19:35.059 09:47:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 80138 00:19:35.059 09:47:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:19:35.318 09:47:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:35.318 09:47:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80138 00:19:35.318 09:47:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:35.318 09:47:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 
00:19:35.318 killing process with pid 80138 00:19:35.318 09:47:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80138' 00:19:35.318 Received shutdown signal, test time was about 2.000000 seconds 00:19:35.318 00:19:35.318 Latency(us) 00:19:35.318 [2024-11-19T09:47:22.941Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:35.318 [2024-11-19T09:47:22.941Z] =================================================================================================================== 00:19:35.318 [2024-11-19T09:47:22.941Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:35.318 09:47:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 80138 00:19:35.318 09:47:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 80138 00:19:35.318 09:47:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:19:35.318 09:47:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:19:35.318 09:47:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:19:35.318 09:47:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:19:35.318 09:47:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:19:35.318 09:47:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:19:35.318 09:47:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:19:35.318 09:47:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80191 00:19:35.318 09:47:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:19:35.318 09:47:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80191 /var/tmp/bperf.sock 00:19:35.318 09:47:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 80191 ']' 00:19:35.318 09:47:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:19:35.318 09:47:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:35.318 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:19:35.318 09:47:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:19:35.318 09:47:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:35.318 09:47:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:19:35.576 I/O size of 131072 is greater than zero copy threshold (65536). 00:19:35.576 Zero copy mechanism will not be used. 00:19:35.576 [2024-11-19 09:47:22.958734] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 
00:19:35.576 [2024-11-19 09:47:22.958826] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80191 ] 00:19:35.576 [2024-11-19 09:47:23.102847] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:35.576 [2024-11-19 09:47:23.165093] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:35.835 09:47:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:35.835 09:47:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:19:35.835 09:47:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:19:35.835 09:47:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:19:35.835 09:47:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:19:36.094 [2024-11-19 09:47:23.516851] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:36.094 09:47:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:36.094 09:47:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:36.353 nvme0n1 00:19:36.353 09:47:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:19:36.353 09:47:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:19:36.611 I/O size of 131072 is greater than zero copy threshold (65536). 00:19:36.611 Zero copy mechanism will not be used. 00:19:36.611 Running I/O for 2 seconds... 
00:19:38.482 7424.00 IOPS, 928.00 MiB/s [2024-11-19T09:47:26.105Z] 7496.00 IOPS, 937.00 MiB/s 00:19:38.482 Latency(us) 00:19:38.482 [2024-11-19T09:47:26.105Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:38.482 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:19:38.482 nvme0n1 : 2.00 7491.90 936.49 0.00 0.00 2132.20 1832.03 7626.01 00:19:38.482 [2024-11-19T09:47:26.105Z] =================================================================================================================== 00:19:38.482 [2024-11-19T09:47:26.105Z] Total : 7491.90 936.49 0.00 0.00 2132.20 1832.03 7626.01 00:19:38.482 { 00:19:38.482 "results": [ 00:19:38.482 { 00:19:38.482 "job": "nvme0n1", 00:19:38.482 "core_mask": "0x2", 00:19:38.482 "workload": "randread", 00:19:38.482 "status": "finished", 00:19:38.482 "queue_depth": 16, 00:19:38.482 "io_size": 131072, 00:19:38.482 "runtime": 2.003229, 00:19:38.482 "iops": 7491.904320474594, 00:19:38.482 "mibps": 936.4880400593242, 00:19:38.482 "io_failed": 0, 00:19:38.482 "io_timeout": 0, 00:19:38.482 "avg_latency_us": 2132.202116689281, 00:19:38.482 "min_latency_us": 1832.0290909090909, 00:19:38.482 "max_latency_us": 7626.007272727273 00:19:38.482 } 00:19:38.482 ], 00:19:38.482 "core_count": 1 00:19:38.482 } 00:19:38.482 09:47:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:19:38.482 09:47:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:19:38.482 09:47:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:19:38.482 09:47:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:19:38.482 09:47:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:19:38.482 | select(.opcode=="crc32c") 00:19:38.482 | "\(.module_name) \(.executed)"' 00:19:39.050 09:47:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:19:39.050 09:47:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:19:39.050 09:47:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:19:39.050 09:47:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:19:39.050 09:47:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80191 00:19:39.050 09:47:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 80191 ']' 00:19:39.050 09:47:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 80191 00:19:39.050 09:47:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:19:39.050 09:47:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:39.050 09:47:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80191 00:19:39.050 09:47:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:39.050 09:47:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 
00:19:39.050 killing process with pid 80191 00:19:39.050 09:47:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80191' 00:19:39.050 09:47:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 80191 00:19:39.050 Received shutdown signal, test time was about 2.000000 seconds 00:19:39.050 00:19:39.050 Latency(us) 00:19:39.050 [2024-11-19T09:47:26.673Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:39.050 [2024-11-19T09:47:26.673Z] =================================================================================================================== 00:19:39.050 [2024-11-19T09:47:26.673Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:39.050 09:47:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 80191 00:19:39.050 09:47:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:19:39.050 09:47:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:19:39.050 09:47:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:19:39.050 09:47:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:19:39.050 09:47:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:19:39.050 09:47:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:19:39.050 09:47:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:19:39.050 09:47:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80242 00:19:39.050 09:47:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:19:39.050 09:47:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80242 /var/tmp/bperf.sock 00:19:39.050 09:47:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 80242 ']' 00:19:39.050 09:47:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:19:39.050 09:47:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:39.050 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:19:39.050 09:47:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:19:39.050 09:47:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:39.050 09:47:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:19:39.309 [2024-11-19 09:47:26.689660] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 
00:19:39.309 [2024-11-19 09:47:26.689755] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80242 ] 00:19:39.309 [2024-11-19 09:47:26.832812] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:39.309 [2024-11-19 09:47:26.890337] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:39.567 09:47:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:39.567 09:47:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:19:39.567 09:47:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:19:39.567 09:47:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:19:39.567 09:47:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:19:39.825 [2024-11-19 09:47:27.288311] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:39.825 09:47:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:39.825 09:47:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:40.084 nvme0n1 00:19:40.084 09:47:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:19:40.084 09:47:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:19:40.343 Running I/O for 2 seconds... 
00:19:42.213 15876.00 IOPS, 62.02 MiB/s [2024-11-19T09:47:29.836Z] 15685.00 IOPS, 61.27 MiB/s 00:19:42.213 Latency(us) 00:19:42.213 [2024-11-19T09:47:29.836Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:42.213 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:42.213 nvme0n1 : 2.01 15706.50 61.35 0.00 0.00 8142.50 7357.91 16205.27 00:19:42.213 [2024-11-19T09:47:29.836Z] =================================================================================================================== 00:19:42.213 [2024-11-19T09:47:29.837Z] Total : 15706.50 61.35 0.00 0.00 8142.50 7357.91 16205.27 00:19:42.214 { 00:19:42.214 "results": [ 00:19:42.214 { 00:19:42.214 "job": "nvme0n1", 00:19:42.214 "core_mask": "0x2", 00:19:42.214 "workload": "randwrite", 00:19:42.214 "status": "finished", 00:19:42.214 "queue_depth": 128, 00:19:42.214 "io_size": 4096, 00:19:42.214 "runtime": 2.005412, 00:19:42.214 "iops": 15706.498215827969, 00:19:42.214 "mibps": 61.353508655578004, 00:19:42.214 "io_failed": 0, 00:19:42.214 "io_timeout": 0, 00:19:42.214 "avg_latency_us": 8142.504410669654, 00:19:42.214 "min_latency_us": 7357.905454545455, 00:19:42.214 "max_latency_us": 16205.265454545455 00:19:42.214 } 00:19:42.214 ], 00:19:42.214 "core_count": 1 00:19:42.214 } 00:19:42.214 09:47:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:19:42.214 09:47:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:19:42.214 09:47:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:19:42.214 09:47:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:19:42.214 | select(.opcode=="crc32c") 00:19:42.214 | "\(.module_name) \(.executed)"' 00:19:42.214 09:47:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:19:42.781 09:47:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:19:42.781 09:47:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:19:42.781 09:47:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:19:42.781 09:47:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:19:42.781 09:47:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80242 00:19:42.781 09:47:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 80242 ']' 00:19:42.781 09:47:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 80242 00:19:42.781 09:47:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:19:42.781 09:47:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:42.781 09:47:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80242 00:19:42.781 09:47:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:42.781 09:47:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 
00:19:42.781 killing process with pid 80242 00:19:42.781 09:47:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80242' 00:19:42.781 09:47:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 80242 00:19:42.781 Received shutdown signal, test time was about 2.000000 seconds 00:19:42.781 00:19:42.781 Latency(us) 00:19:42.781 [2024-11-19T09:47:30.404Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:42.781 [2024-11-19T09:47:30.404Z] =================================================================================================================== 00:19:42.781 [2024-11-19T09:47:30.404Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:42.781 09:47:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 80242 00:19:42.781 09:47:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:19:42.781 09:47:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:19:42.781 09:47:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:19:42.781 09:47:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:19:42.781 09:47:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:19:42.781 09:47:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:19:42.781 09:47:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:19:42.781 09:47:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80296 00:19:42.781 09:47:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:19:42.781 09:47:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80296 /var/tmp/bperf.sock 00:19:42.781 09:47:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 80296 ']' 00:19:42.781 09:47:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:19:42.781 09:47:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:42.781 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:19:42.781 09:47:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:19:42.781 09:47:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:42.781 09:47:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:19:43.040 [2024-11-19 09:47:30.440151] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:19:43.040 I/O size of 131072 is greater than zero copy threshold (65536). 00:19:43.040 Zero copy mechanism will not be used. 
00:19:43.040 [2024-11-19 09:47:30.440252] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80296 ] 00:19:43.040 [2024-11-19 09:47:30.583792] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:43.040 [2024-11-19 09:47:30.649225] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:43.974 09:47:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:43.974 09:47:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:19:43.974 09:47:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:19:43.974 09:47:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:19:43.974 09:47:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:19:44.232 [2024-11-19 09:47:31.796894] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:44.232 09:47:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:44.232 09:47:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:44.800 nvme0n1 00:19:44.800 09:47:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:19:44.800 09:47:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:19:44.800 I/O size of 131072 is greater than zero copy threshold (65536). 00:19:44.800 Zero copy mechanism will not be used. 00:19:44.800 Running I/O for 2 seconds... 
00:19:46.741 5947.00 IOPS, 743.38 MiB/s [2024-11-19T09:47:34.624Z] 5752.50 IOPS, 719.06 MiB/s 00:19:47.001 Latency(us) 00:19:47.001 [2024-11-19T09:47:34.624Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:47.001 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:19:47.001 nvme0n1 : 2.00 5748.73 718.59 0.00 0.00 2776.04 2085.24 7149.38 00:19:47.001 [2024-11-19T09:47:34.624Z] =================================================================================================================== 00:19:47.001 [2024-11-19T09:47:34.624Z] Total : 5748.73 718.59 0.00 0.00 2776.04 2085.24 7149.38 00:19:47.001 { 00:19:47.001 "results": [ 00:19:47.001 { 00:19:47.001 "job": "nvme0n1", 00:19:47.001 "core_mask": "0x2", 00:19:47.001 "workload": "randwrite", 00:19:47.001 "status": "finished", 00:19:47.001 "queue_depth": 16, 00:19:47.001 "io_size": 131072, 00:19:47.001 "runtime": 2.004096, 00:19:47.001 "iops": 5748.726607907007, 00:19:47.001 "mibps": 718.5908259883759, 00:19:47.001 "io_failed": 0, 00:19:47.001 "io_timeout": 0, 00:19:47.001 "avg_latency_us": 2776.04217563185, 00:19:47.001 "min_latency_us": 2085.2363636363634, 00:19:47.001 "max_latency_us": 7149.381818181818 00:19:47.001 } 00:19:47.001 ], 00:19:47.001 "core_count": 1 00:19:47.001 } 00:19:47.001 09:47:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:19:47.001 09:47:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:19:47.001 09:47:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:19:47.001 09:47:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:19:47.001 09:47:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:19:47.001 | select(.opcode=="crc32c") 00:19:47.001 | "\(.module_name) \(.executed)"' 00:19:47.260 09:47:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:19:47.260 09:47:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:19:47.260 09:47:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:19:47.260 09:47:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:19:47.260 09:47:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80296 00:19:47.260 09:47:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 80296 ']' 00:19:47.260 09:47:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 80296 00:19:47.260 09:47:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:19:47.260 09:47:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:47.260 09:47:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80296 00:19:47.260 09:47:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:47.260 09:47:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 
00:19:47.260 killing process with pid 80296 00:19:47.260 09:47:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80296' 00:19:47.260 Received shutdown signal, test time was about 2.000000 seconds 00:19:47.260 00:19:47.260 Latency(us) 00:19:47.260 [2024-11-19T09:47:34.883Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:47.260 [2024-11-19T09:47:34.883Z] =================================================================================================================== 00:19:47.260 [2024-11-19T09:47:34.883Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:47.260 09:47:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 80296 00:19:47.260 09:47:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 80296 00:19:47.541 09:47:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 80113 00:19:47.541 09:47:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 80113 ']' 00:19:47.541 09:47:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 80113 00:19:47.541 09:47:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:19:47.541 09:47:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:47.541 09:47:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80113 00:19:47.541 09:47:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:47.541 09:47:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:47.541 killing process with pid 80113 00:19:47.541 09:47:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80113' 00:19:47.541 09:47:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 80113 00:19:47.541 09:47:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 80113 00:19:47.800 00:19:47.800 real 0m16.569s 00:19:47.800 user 0m32.374s 00:19:47.800 sys 0m4.902s 00:19:47.800 09:47:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:47.800 09:47:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:19:47.800 ************************************ 00:19:47.800 END TEST nvmf_digest_clean 00:19:47.800 ************************************ 00:19:47.800 09:47:35 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:19:47.800 09:47:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:47.800 09:47:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:47.800 09:47:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:19:47.800 ************************************ 00:19:47.800 START TEST nvmf_digest_error 00:19:47.800 ************************************ 00:19:47.800 09:47:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # run_digest_error 00:19:47.800 09:47:35 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:19:47.800 09:47:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:47.800 09:47:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:47.800 09:47:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:47.800 09:47:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=80386 00:19:47.800 09:47:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 80386 00:19:47.800 09:47:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:19:47.800 09:47:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 80386 ']' 00:19:47.800 09:47:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:47.800 09:47:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:47.800 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:47.800 09:47:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:47.800 09:47:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:47.800 09:47:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:47.800 [2024-11-19 09:47:35.388932] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:19:47.800 [2024-11-19 09:47:35.389034] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:48.059 [2024-11-19 09:47:35.533392] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:48.059 [2024-11-19 09:47:35.595355] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:48.059 [2024-11-19 09:47:35.595413] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:48.059 [2024-11-19 09:47:35.595424] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:48.059 [2024-11-19 09:47:35.595433] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:48.059 [2024-11-19 09:47:35.595441] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:48.059 [2024-11-19 09:47:35.595854] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:48.059 09:47:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:48.059 09:47:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:19:48.059 09:47:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:48.059 09:47:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:48.059 09:47:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:48.317 09:47:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:48.317 09:47:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:19:48.317 09:47:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.317 09:47:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:48.317 [2024-11-19 09:47:35.708292] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:19:48.317 09:47:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.317 09:47:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:19:48.317 09:47:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:19:48.317 09:47:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.317 09:47:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:48.317 [2024-11-19 09:47:35.776543] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:48.317 null0 00:19:48.317 [2024-11-19 09:47:35.831329] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:48.317 [2024-11-19 09:47:35.855467] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:48.317 09:47:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.317 09:47:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:19:48.317 09:47:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:19:48.317 09:47:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:19:48.317 09:47:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:19:48.317 09:47:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:19:48.317 09:47:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80405 00:19:48.317 09:47:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80405 /var/tmp/bperf.sock 00:19:48.317 09:47:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:19:48.317 09:47:35 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 80405 ']' 00:19:48.317 09:47:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:19:48.317 09:47:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:48.317 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:19:48.317 09:47:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:19:48.317 09:47:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:48.317 09:47:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:48.317 [2024-11-19 09:47:35.924498] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:19:48.317 [2024-11-19 09:47:35.924638] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80405 ] 00:19:48.576 [2024-11-19 09:47:36.070860] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:48.576 [2024-11-19 09:47:36.151086] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:48.834 [2024-11-19 09:47:36.231369] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:49.402 09:47:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:49.402 09:47:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:19:49.402 09:47:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:19:49.402 09:47:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:19:49.661 09:47:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:19:49.661 09:47:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.661 09:47:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:49.661 09:47:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.661 09:47:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:49.661 09:47:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:50.228 nvme0n1 00:19:50.228 09:47:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:19:50.229 09:47:37 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.229 09:47:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:50.229 09:47:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.229 09:47:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:19:50.229 09:47:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:19:50.229 Running I/O for 2 seconds... 00:19:50.229 [2024-11-19 09:47:37.768408] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb2c0) 00:19:50.229 [2024-11-19 09:47:37.768503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7945 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.229 [2024-11-19 09:47:37.768520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:50.229 [2024-11-19 09:47:37.785826] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb2c0) 00:19:50.229 [2024-11-19 09:47:37.785916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2489 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.229 [2024-11-19 09:47:37.785946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:50.229 [2024-11-19 09:47:37.803034] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb2c0) 00:19:50.229 [2024-11-19 09:47:37.803099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7862 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.229 [2024-11-19 09:47:37.803115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:50.229 [2024-11-19 09:47:37.820263] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb2c0) 00:19:50.229 [2024-11-19 09:47:37.820313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17765 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.229 [2024-11-19 09:47:37.820328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:50.229 [2024-11-19 09:47:37.837481] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb2c0) 00:19:50.229 [2024-11-19 09:47:37.837538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8139 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.229 [2024-11-19 09:47:37.837570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:50.488 [2024-11-19 09:47:37.854759] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb2c0) 00:19:50.488 [2024-11-19 09:47:37.854815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6904 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.488 [2024-11-19 09:47:37.854830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:50.488 [2024-11-19 09:47:37.872152] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb2c0) 00:19:50.488 [2024-11-19 09:47:37.872383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:766 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.488 [2024-11-19 09:47:37.872409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:50.488 [2024-11-19 09:47:37.889255] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb2c0) 00:19:50.488 [2024-11-19 09:47:37.889295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10549 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.488 [2024-11-19 09:47:37.889324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:50.488 [2024-11-19 09:47:37.905774] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb2c0) 00:19:50.488 [2024-11-19 09:47:37.905816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:8903 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.488 [2024-11-19 09:47:37.905845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:50.488 [2024-11-19 09:47:37.922773] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb2c0) 00:19:50.488 [2024-11-19 09:47:37.922817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:2131 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.488 [2024-11-19 09:47:37.922854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:50.488 [2024-11-19 09:47:37.939385] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb2c0) 00:19:50.488 [2024-11-19 09:47:37.939560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:651 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.488 [2024-11-19 09:47:37.939578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:50.488 [2024-11-19 09:47:37.956642] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb2c0) 00:19:50.488 [2024-11-19 09:47:37.956702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:17074 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.488 [2024-11-19 09:47:37.956732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:50.488 [2024-11-19 09:47:37.974172] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb2c0) 00:19:50.488 [2024-11-19 09:47:37.974374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:25 nsid:1 lba:20688 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.488 [2024-11-19 09:47:37.974393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:50.488 [2024-11-19 09:47:37.992178] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb2c0) 00:19:50.488 [2024-11-19 09:47:37.992233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:18567 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.488 [2024-11-19 09:47:37.992250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:50.488 [2024-11-19 09:47:38.009995] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb2c0) 00:19:50.488 [2024-11-19 09:47:38.010153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:15463 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.488 [2024-11-19 09:47:38.010172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:50.488 [2024-11-19 09:47:38.027704] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb2c0) 00:19:50.488 [2024-11-19 09:47:38.027857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:8850 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.488 [2024-11-19 09:47:38.027875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:50.488 [2024-11-19 09:47:38.045102] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb2c0) 00:19:50.488 [2024-11-19 09:47:38.045144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:165 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.488 [2024-11-19 09:47:38.045159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:50.488 [2024-11-19 09:47:38.062651] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb2c0) 00:19:50.488 [2024-11-19 09:47:38.062706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:13252 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.488 [2024-11-19 09:47:38.062736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:50.488 [2024-11-19 09:47:38.079409] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb2c0) 00:19:50.488 [2024-11-19 09:47:38.079452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:23691 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.488 [2024-11-19 09:47:38.079466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:50.488 [2024-11-19 09:47:38.096617] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb2c0) 00:19:50.488 [2024-11-19 09:47:38.096658] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:6649 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.488 [2024-11-19 09:47:38.096688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:50.747 [2024-11-19 09:47:38.114306] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb2c0) 00:19:50.747 [2024-11-19 09:47:38.114345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21754 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.747 [2024-11-19 09:47:38.114374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:50.747 [2024-11-19 09:47:38.131496] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb2c0) 00:19:50.747 [2024-11-19 09:47:38.131696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:6564 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.747 [2024-11-19 09:47:38.131716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:50.747 [2024-11-19 09:47:38.149180] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb2c0) 00:19:50.747 [2024-11-19 09:47:38.149250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:2529 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.747 [2024-11-19 09:47:38.149266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:50.747 [2024-11-19 09:47:38.166484] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb2c0) 00:19:50.747 [2024-11-19 09:47:38.166527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:5246 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.747 [2024-11-19 09:47:38.166541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:50.747 [2024-11-19 09:47:38.183564] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb2c0) 00:19:50.747 [2024-11-19 09:47:38.183769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22127 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.747 [2024-11-19 09:47:38.183788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:50.747 [2024-11-19 09:47:38.200938] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb2c0) 00:19:50.747 [2024-11-19 09:47:38.200977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:17207 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.747 [2024-11-19 09:47:38.201007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:50.747 [2024-11-19 09:47:38.218385] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1fcb2c0) 00:19:50.747 [2024-11-19 09:47:38.218425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:6210 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.747 [2024-11-19 09:47:38.218440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:50.747 [2024-11-19 09:47:38.235849] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb2c0) 00:19:50.747 [2024-11-19 09:47:38.236049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:5670 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.747 [2024-11-19 09:47:38.236067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:50.747 [2024-11-19 09:47:38.252999] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb2c0) 00:19:50.747 [2024-11-19 09:47:38.253039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:14741 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.747 [2024-11-19 09:47:38.253068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:50.747 [2024-11-19 09:47:38.270099] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb2c0) 00:19:50.747 [2024-11-19 09:47:38.270139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:18681 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.747 [2024-11-19 09:47:38.270167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:50.747 [2024-11-19 09:47:38.287457] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb2c0) 00:19:50.747 [2024-11-19 09:47:38.287649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:21111 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.747 [2024-11-19 09:47:38.287668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:50.747 [2024-11-19 09:47:38.304693] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb2c0) 00:19:50.747 [2024-11-19 09:47:38.304755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:11622 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.747 [2024-11-19 09:47:38.304786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:50.747 [2024-11-19 09:47:38.322382] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb2c0) 00:19:50.747 [2024-11-19 09:47:38.322421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:9456 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.747 [2024-11-19 09:47:38.322450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:50.747 [2024-11-19 09:47:38.339083] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb2c0) 00:19:50.747 [2024-11-19 09:47:38.339122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:14552 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.747 [2024-11-19 09:47:38.339152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:50.747 [2024-11-19 09:47:38.356287] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb2c0) 00:19:50.747 [2024-11-19 09:47:38.356331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:2107 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.748 [2024-11-19 09:47:38.356359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:51.006 [2024-11-19 09:47:38.373647] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb2c0) 00:19:51.006 [2024-11-19 09:47:38.373688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:5412 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.006 [2024-11-19 09:47:38.373718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:51.006 [2024-11-19 09:47:38.390254] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb2c0) 00:19:51.006 [2024-11-19 09:47:38.390320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:5871 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.006 [2024-11-19 09:47:38.390347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:51.006 [2024-11-19 09:47:38.407685] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb2c0) 00:19:51.006 [2024-11-19 09:47:38.407872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:14388 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.006 [2024-11-19 09:47:38.407892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:51.006 [2024-11-19 09:47:38.425330] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb2c0) 00:19:51.006 [2024-11-19 09:47:38.425538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:7222 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.006 [2024-11-19 09:47:38.425556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:51.006 [2024-11-19 09:47:38.442836] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb2c0) 00:19:51.006 [2024-11-19 09:47:38.442892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:18102 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.006 [2024-11-19 09:47:38.442921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:19:51.006 [2024-11-19 09:47:38.459654] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb2c0) 00:19:51.006 [2024-11-19 09:47:38.459694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:4084 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.006 [2024-11-19 09:47:38.459723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:51.006 [2024-11-19 09:47:38.476531] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb2c0) 00:19:51.006 [2024-11-19 09:47:38.476587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:13782 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.006 [2024-11-19 09:47:38.476618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:51.006 [2024-11-19 09:47:38.493393] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb2c0) 00:19:51.006 [2024-11-19 09:47:38.493446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:14850 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.006 [2024-11-19 09:47:38.493476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:51.006 [2024-11-19 09:47:38.509946] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb2c0) 00:19:51.006 [2024-11-19 09:47:38.510132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:12257 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.006 [2024-11-19 09:47:38.510166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:51.006 [2024-11-19 09:47:38.526559] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb2c0) 00:19:51.006 [2024-11-19 09:47:38.526609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:2130 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.006 [2024-11-19 09:47:38.526638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:51.006 [2024-11-19 09:47:38.544039] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb2c0) 00:19:51.006 [2024-11-19 09:47:38.544342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:7166 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.006 [2024-11-19 09:47:38.544362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:51.006 [2024-11-19 09:47:38.561291] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb2c0) 00:19:51.006 [2024-11-19 09:47:38.561358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:11305 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.006 [2024-11-19 09:47:38.561389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:51.006 [2024-11-19 09:47:38.578727] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb2c0) 00:19:51.006 [2024-11-19 09:47:38.578776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:23771 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.006 [2024-11-19 09:47:38.578791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:51.006 [2024-11-19 09:47:38.595945] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb2c0) 00:19:51.006 [2024-11-19 09:47:38.596116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:22089 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.006 [2024-11-19 09:47:38.596135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:51.006 [2024-11-19 09:47:38.613349] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb2c0) 00:19:51.006 [2024-11-19 09:47:38.613391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:1625 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.006 [2024-11-19 09:47:38.613422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:51.265 [2024-11-19 09:47:38.630655] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb2c0) 00:19:51.265 [2024-11-19 09:47:38.630691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:15908 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.265 [2024-11-19 09:47:38.630705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:51.265 [2024-11-19 09:47:38.647820] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb2c0) 00:19:51.265 [2024-11-19 09:47:38.647861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:6728 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.265 [2024-11-19 09:47:38.647875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:51.265 [2024-11-19 09:47:38.664932] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb2c0) 00:19:51.265 [2024-11-19 09:47:38.664986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:23562 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.265 [2024-11-19 09:47:38.665000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:51.265 [2024-11-19 09:47:38.682261] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb2c0) 00:19:51.265 [2024-11-19 09:47:38.682307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:18439 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.265 [2024-11-19 09:47:38.682322] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:51.265 [2024-11-19 09:47:38.699845] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb2c0) 00:19:51.265 [2024-11-19 09:47:38.699910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:25180 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.265 [2024-11-19 09:47:38.699925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:51.265 [2024-11-19 09:47:38.717050] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb2c0) 00:19:51.265 [2024-11-19 09:47:38.717093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:5951 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.265 [2024-11-19 09:47:38.717107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:51.265 [2024-11-19 09:47:38.734103] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb2c0) 00:19:51.265 [2024-11-19 09:47:38.734143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:20920 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.265 [2024-11-19 09:47:38.734158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:51.265 14548.00 IOPS, 56.83 MiB/s [2024-11-19T09:47:38.888Z] [2024-11-19 09:47:38.751256] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb2c0) 00:19:51.265 [2024-11-19 09:47:38.751433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:22929 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.265 [2024-11-19 09:47:38.751455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:51.265 [2024-11-19 09:47:38.768599] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb2c0) 00:19:51.265 [2024-11-19 09:47:38.768652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:24620 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.265 [2024-11-19 09:47:38.768667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:51.265 [2024-11-19 09:47:38.785842] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb2c0) 00:19:51.265 [2024-11-19 09:47:38.785884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:9029 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.265 [2024-11-19 09:47:38.785914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:51.265 [2024-11-19 09:47:38.803122] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb2c0) 00:19:51.265 [2024-11-19 09:47:38.803169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 
nsid:1 lba:22372 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.265 [2024-11-19 09:47:38.803222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:51.265 [2024-11-19 09:47:38.820399] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb2c0) 00:19:51.265 [2024-11-19 09:47:38.820438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:1783 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.266 [2024-11-19 09:47:38.820468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:51.266 [2024-11-19 09:47:38.837793] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb2c0) 00:19:51.266 [2024-11-19 09:47:38.837976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:6620 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.266 [2024-11-19 09:47:38.837994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:51.266 [2024-11-19 09:47:38.862782] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb2c0) 00:19:51.266 [2024-11-19 09:47:38.862825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:3150 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.266 [2024-11-19 09:47:38.862856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:51.266 [2024-11-19 09:47:38.880110] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb2c0) 00:19:51.266 [2024-11-19 09:47:38.880151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:1807 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.266 [2024-11-19 09:47:38.880180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:51.524 [2024-11-19 09:47:38.897351] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb2c0) 00:19:51.524 [2024-11-19 09:47:38.897393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:18855 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.524 [2024-11-19 09:47:38.897423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:51.524 [2024-11-19 09:47:38.914471] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb2c0) 00:19:51.524 [2024-11-19 09:47:38.914509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:24231 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.524 [2024-11-19 09:47:38.914539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:51.525 [2024-11-19 09:47:38.931536] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb2c0) 00:19:51.525 [2024-11-19 09:47:38.931703] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:7362 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.525 [2024-11-19 09:47:38.931721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:51.525 [2024-11-19 09:47:38.948935] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb2c0) 00:19:51.525 [2024-11-19 09:47:38.948986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:16043 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.525 [2024-11-19 09:47:38.949002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:51.525 [2024-11-19 09:47:38.966105] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb2c0) 00:19:51.525 [2024-11-19 09:47:38.966151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:23165 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.525 [2024-11-19 09:47:38.966166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:51.525 [2024-11-19 09:47:38.983259] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb2c0) 00:19:51.525 [2024-11-19 09:47:38.983301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:17589 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.525 [2024-11-19 09:47:38.983316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:51.525 [2024-11-19 09:47:39.000432] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb2c0) 00:19:51.525 [2024-11-19 09:47:39.000479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:20855 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.525 [2024-11-19 09:47:39.000494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:51.525 [2024-11-19 09:47:39.017555] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb2c0) 00:19:51.525 [2024-11-19 09:47:39.017722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:10777 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.525 [2024-11-19 09:47:39.017740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:51.525 [2024-11-19 09:47:39.034871] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb2c0) 00:19:51.525 [2024-11-19 09:47:39.034917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:3015 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.525 [2024-11-19 09:47:39.034932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:51.525 [2024-11-19 09:47:39.052026] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1fcb2c0) 00:19:51.525 [2024-11-19 09:47:39.052073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:5919 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.525 [2024-11-19 09:47:39.052088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:51.525 [2024-11-19 09:47:39.069137] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb2c0) 00:19:51.525 [2024-11-19 09:47:39.069181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:18351 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.525 [2024-11-19 09:47:39.069196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:51.525 [2024-11-19 09:47:39.086355] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb2c0) 00:19:51.525 [2024-11-19 09:47:39.086527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:21803 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.525 [2024-11-19 09:47:39.086546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:51.525 [2024-11-19 09:47:39.103557] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb2c0) 00:19:51.525 [2024-11-19 09:47:39.103598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:9028 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.525 [2024-11-19 09:47:39.103613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:51.525 [2024-11-19 09:47:39.120474] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb2c0) 00:19:51.525 [2024-11-19 09:47:39.120647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:8177 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.525 [2024-11-19 09:47:39.120665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:51.525 [2024-11-19 09:47:39.137782] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb2c0) 00:19:51.525 [2024-11-19 09:47:39.137824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:25379 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.525 [2024-11-19 09:47:39.137854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:51.783 [2024-11-19 09:47:39.154975] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb2c0) 00:19:51.783 [2024-11-19 09:47:39.155015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:8611 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.783 [2024-11-19 09:47:39.155044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:51.783 [2024-11-19 09:47:39.172165] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb2c0) 00:19:51.783 [2024-11-19 09:47:39.172204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:14367 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.783 [2024-11-19 09:47:39.172249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:51.783 [2024-11-19 09:47:39.189147] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb2c0) 00:19:51.783 [2024-11-19 09:47:39.189185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:25122 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.783 [2024-11-19 09:47:39.189214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:51.783 [2024-11-19 09:47:39.206435] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb2c0) 00:19:51.783 [2024-11-19 09:47:39.206499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:13302 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.783 [2024-11-19 09:47:39.206514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:51.783 [2024-11-19 09:47:39.223680] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb2c0) 00:19:51.783 [2024-11-19 09:47:39.223721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:23858 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.783 [2024-11-19 09:47:39.223736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:51.783 [2024-11-19 09:47:39.240966] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb2c0) 00:19:51.783 [2024-11-19 09:47:39.241005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:10487 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.783 [2024-11-19 09:47:39.241035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:51.783 [2024-11-19 09:47:39.258348] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb2c0) 00:19:51.783 [2024-11-19 09:47:39.258390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:5022 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.783 [2024-11-19 09:47:39.258404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:51.783 [2024-11-19 09:47:39.275579] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb2c0) 00:19:51.783 [2024-11-19 09:47:39.275770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:4286 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.783 [2024-11-19 09:47:39.275788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:19:51.783 [2024-11-19 09:47:39.292891] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb2c0) 00:19:51.783 [2024-11-19 09:47:39.292931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:7032 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.783 [2024-11-19 09:47:39.292960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:51.783 [2024-11-19 09:47:39.310336] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb2c0) 00:19:51.783 [2024-11-19 09:47:39.310378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:10714 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.783 [2024-11-19 09:47:39.310393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:51.783 [2024-11-19 09:47:39.327468] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb2c0) 00:19:51.783 [2024-11-19 09:47:39.327511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:15165 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.783 [2024-11-19 09:47:39.327526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:51.783 [2024-11-19 09:47:39.344562] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb2c0) 00:19:51.783 [2024-11-19 09:47:39.344607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:1154 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.783 [2024-11-19 09:47:39.344622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:51.783 [2024-11-19 09:47:39.361784] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb2c0) 00:19:51.783 [2024-11-19 09:47:39.361825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:15368 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.783 [2024-11-19 09:47:39.361839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:51.783 [2024-11-19 09:47:39.379161] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb2c0) 00:19:51.783 [2024-11-19 09:47:39.379237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:580 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.783 [2024-11-19 09:47:39.379253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:51.783 [2024-11-19 09:47:39.396220] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb2c0) 00:19:51.783 [2024-11-19 09:47:39.396260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:16072 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.783 [2024-11-19 09:47:39.396274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:52.041 [2024-11-19 09:47:39.413436] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb2c0) 00:19:52.041 [2024-11-19 09:47:39.413474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:7350 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.041 [2024-11-19 09:47:39.413503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:52.041 [2024-11-19 09:47:39.430519] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb2c0) 00:19:52.041 [2024-11-19 09:47:39.430558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:21605 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.041 [2024-11-19 09:47:39.430572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:52.041 [2024-11-19 09:47:39.447873] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb2c0) 00:19:52.041 [2024-11-19 09:47:39.448047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:18211 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.041 [2024-11-19 09:47:39.448065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:52.041 [2024-11-19 09:47:39.465451] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb2c0) 00:19:52.041 [2024-11-19 09:47:39.465612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:7417 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.041 [2024-11-19 09:47:39.465632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:52.041 [2024-11-19 09:47:39.482975] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb2c0) 00:19:52.041 [2024-11-19 09:47:39.483019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:18240 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.041 [2024-11-19 09:47:39.483048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:52.041 [2024-11-19 09:47:39.500160] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb2c0) 00:19:52.041 [2024-11-19 09:47:39.500201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:22014 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.041 [2024-11-19 09:47:39.500231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:52.041 [2024-11-19 09:47:39.517311] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb2c0) 00:19:52.041 [2024-11-19 09:47:39.517373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:3285 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.041 [2024-11-19 09:47:39.517388] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:52.041 [2024-11-19 09:47:39.534382] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb2c0) 00:19:52.041 [2024-11-19 09:47:39.534428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:24441 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.041 [2024-11-19 09:47:39.534457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:52.041 [2024-11-19 09:47:39.551511] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb2c0) 00:19:52.041 [2024-11-19 09:47:39.551556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:2166 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.041 [2024-11-19 09:47:39.551570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:52.042 [2024-11-19 09:47:39.568470] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb2c0) 00:19:52.042 [2024-11-19 09:47:39.568629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:8245 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.042 [2024-11-19 09:47:39.568647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:52.042 [2024-11-19 09:47:39.585610] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb2c0) 00:19:52.042 [2024-11-19 09:47:39.585651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:1320 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.042 [2024-11-19 09:47:39.585680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:52.042 [2024-11-19 09:47:39.602463] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb2c0) 00:19:52.042 [2024-11-19 09:47:39.602500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21732 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.042 [2024-11-19 09:47:39.602529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:52.042 [2024-11-19 09:47:39.619581] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb2c0) 00:19:52.042 [2024-11-19 09:47:39.619743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:4380 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.042 [2024-11-19 09:47:39.619761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:52.042 [2024-11-19 09:47:39.636921] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb2c0) 00:19:52.042 [2024-11-19 09:47:39.636965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:19036 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:52.042 [2024-11-19 09:47:39.636979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:52.042 [2024-11-19 09:47:39.654027] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb2c0) 00:19:52.042 [2024-11-19 09:47:39.654071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:7512 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.042 [2024-11-19 09:47:39.654086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:52.300 [2024-11-19 09:47:39.671220] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb2c0) 00:19:52.300 [2024-11-19 09:47:39.671268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:18112 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.300 [2024-11-19 09:47:39.671283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:52.300 [2024-11-19 09:47:39.688038] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb2c0) 00:19:52.300 [2024-11-19 09:47:39.688077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:105 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.300 [2024-11-19 09:47:39.688107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:52.300 [2024-11-19 09:47:39.704882] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb2c0) 00:19:52.300 [2024-11-19 09:47:39.704922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:9653 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.300 [2024-11-19 09:47:39.704941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:52.300 [2024-11-19 09:47:39.721737] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb2c0) 00:19:52.300 [2024-11-19 09:47:39.721776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:18451 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.300 [2024-11-19 09:47:39.721805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:52.300 [2024-11-19 09:47:39.740136] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb2c0) 00:19:52.300 [2024-11-19 09:47:39.740177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:16265 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.300 [2024-11-19 09:47:39.740191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:52.300 14611.50 IOPS, 57.08 MiB/s 00:19:52.300 Latency(us) 00:19:52.300 [2024-11-19T09:47:39.923Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:52.300 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:19:52.300 
nvme0n1 : 2.01 14648.43 57.22 0.00 0.00 8731.36 7983.48 33363.78 00:19:52.300 [2024-11-19T09:47:39.923Z] =================================================================================================================== 00:19:52.300 [2024-11-19T09:47:39.923Z] Total : 14648.43 57.22 0.00 0.00 8731.36 7983.48 33363.78 00:19:52.300 { 00:19:52.300 "results": [ 00:19:52.300 { 00:19:52.300 "job": "nvme0n1", 00:19:52.300 "core_mask": "0x2", 00:19:52.300 "workload": "randread", 00:19:52.300 "status": "finished", 00:19:52.300 "queue_depth": 128, 00:19:52.300 "io_size": 4096, 00:19:52.300 "runtime": 2.012297, 00:19:52.300 "iops": 14648.434102918207, 00:19:52.300 "mibps": 57.220445714524246, 00:19:52.300 "io_failed": 0, 00:19:52.300 "io_timeout": 0, 00:19:52.300 "avg_latency_us": 8731.360712049764, 00:19:52.300 "min_latency_us": 7983.476363636363, 00:19:52.300 "max_latency_us": 33363.781818181815 00:19:52.300 } 00:19:52.300 ], 00:19:52.300 "core_count": 1 00:19:52.300 } 00:19:52.300 09:47:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:19:52.300 09:47:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:19:52.300 09:47:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:19:52.300 | .driver_specific 00:19:52.300 | .nvme_error 00:19:52.300 | .status_code 00:19:52.300 | .command_transient_transport_error' 00:19:52.300 09:47:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:19:52.558 09:47:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 115 > 0 )) 00:19:52.558 09:47:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80405 00:19:52.559 09:47:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 80405 ']' 00:19:52.559 09:47:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 80405 00:19:52.559 09:47:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:19:52.559 09:47:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:52.559 09:47:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80405 00:19:52.559 killing process with pid 80405 00:19:52.559 Received shutdown signal, test time was about 2.000000 seconds 00:19:52.559 00:19:52.559 Latency(us) 00:19:52.559 [2024-11-19T09:47:40.182Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:52.559 [2024-11-19T09:47:40.182Z] =================================================================================================================== 00:19:52.559 [2024-11-19T09:47:40.182Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:52.559 09:47:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:52.559 09:47:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:52.559 09:47:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80405' 00:19:52.559 09:47:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@973 -- # kill 80405 00:19:52.559 09:47:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 80405 00:19:52.816 09:47:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:19:52.816 09:47:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:19:52.816 09:47:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:19:52.816 09:47:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:19:52.816 09:47:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:19:52.816 09:47:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80471 00:19:52.816 09:47:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:19:52.816 09:47:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80471 /var/tmp/bperf.sock 00:19:52.816 09:47:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 80471 ']' 00:19:52.816 09:47:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:19:52.816 09:47:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:52.816 09:47:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:19:52.816 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:19:52.816 09:47:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:52.816 09:47:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:52.816 [2024-11-19 09:47:40.376213] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:19:52.816 [2024-11-19 09:47:40.376494] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80471 ] 00:19:52.816 I/O size of 131072 is greater than zero copy threshold (65536). 00:19:52.816 Zero copy mechanism will not be used. 
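Note: the get_transient_errcount step above reads the per-status-code NVMe error counters that bdevperf accumulates (enabled via bdev_nvme_set_options --nvme-error-stat) and the test only passes when the COMMAND TRANSIENT TRANSPORT ERROR count is non-zero. A minimal sketch of that check, using only the socket path, bdev name and jq filter visible in this log:

    # Hedged sketch of host/digest.sh's transient-error check, reconstructed from
    # the rpc.py call and jq filter shown above. Socket path and bdev name are the
    # ones used by this log's bdevperf instance.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    count=$("$rpc" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0]
                 | .driver_specific
                 | .nvme_error
                 | .status_code
                 | .command_transient_transport_error')
    # "(( 115 > 0 ))" in the log is this comparison, with count=115 for the first run.
    (( count > 0 )) && echo "nvme0n1 saw ${count} transient transport errors"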
00:19:53.073 [2024-11-19 09:47:40.521746] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:53.073 [2024-11-19 09:47:40.583238] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:53.074 [2024-11-19 09:47:40.639401] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:54.006 09:47:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:54.006 09:47:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:19:54.006 09:47:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:19:54.006 09:47:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:19:54.265 09:47:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:19:54.265 09:47:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.265 09:47:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:54.265 09:47:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.265 09:47:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:54.265 09:47:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:54.524 nvme0n1 00:19:54.524 09:47:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:19:54.524 09:47:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.524 09:47:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:54.524 09:47:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.524 09:47:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:19:54.524 09:47:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:19:54.524 I/O size of 131072 is greater than zero copy threshold (65536). 00:19:54.524 Zero copy mechanism will not be used. 00:19:54.524 Running I/O for 2 seconds... 
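Note: the second case (run_bperf_err randread 131072 16) drives the same digest-error flow at a 128 KiB I/O size and queue depth 16: bdevperf is started idle (-z) on /var/tmp/bperf.sock, error statistics and unlimited bdev retries are enabled, crc32c injection is switched off while the controller is attached with data digest enabled (--ddgst), then injection is re-enabled in corrupt mode so the data digests fail verification and each read completes with COMMAND TRANSIENT TRANSPORT ERROR, as the records below show. A condensed sketch of that sequence, assembled only from the commands visible in this log (the target-side accel_error_inject_error calls are shown against the default RPC socket, which this log does not spell out):

    # Hedged sketch of the run_bperf_err flow above; paths, flags and addresses are
    # taken verbatim from the log, the target RPC socket is assumed to be the default.
    SPDK=/home/vagrant/spdk_repo/spdk

    # Start bdevperf idle (-z) on its own RPC socket: 128 KiB random reads, QD 16, 2 s.
    "$SPDK/build/examples/bdevperf" -m 2 -r /var/tmp/bperf.sock \
        -w randread -o 131072 -t 2 -q 16 -z &

    # Count NVMe errors per status code and retry failed I/O indefinitely, so the
    # digest errors show up in iostat instead of failing the job.
    "$SPDK/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_nvme_set_options \
        --nvme-error-stat --bdev-retry-count -1

    # Keep crc32c healthy on the target while attaching with data digest enabled.
    "$SPDK/scripts/rpc.py" accel_error_inject_error -o crc32c -t disable
    "$SPDK/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # Re-enable corrupt-type injection for crc32c (flags exactly as in the log),
    # then run the workload; reads now fail the data digest check and complete as
    # transient transport errors, which bdevperf counts per status code.
    "$SPDK/scripts/rpc.py" accel_error_inject_error -o crc32c -t corrupt -i 32
    "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests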
00:19:54.524 [2024-11-19 09:47:42.144264] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:54.524 [2024-11-19 09:47:42.144324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.524 [2024-11-19 09:47:42.144341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:54.785 [2024-11-19 09:47:42.148643] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:54.785 [2024-11-19 09:47:42.148686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.785 [2024-11-19 09:47:42.148701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:54.785 [2024-11-19 09:47:42.152979] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:54.785 [2024-11-19 09:47:42.153021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.785 [2024-11-19 09:47:42.153035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:54.785 [2024-11-19 09:47:42.157323] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:54.785 [2024-11-19 09:47:42.157364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.785 [2024-11-19 09:47:42.157379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:54.785 [2024-11-19 09:47:42.161649] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:54.785 [2024-11-19 09:47:42.161690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.785 [2024-11-19 09:47:42.161705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:54.785 [2024-11-19 09:47:42.165978] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:54.785 [2024-11-19 09:47:42.166020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.785 [2024-11-19 09:47:42.166034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:54.785 [2024-11-19 09:47:42.170325] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:54.785 [2024-11-19 09:47:42.170365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.785 [2024-11-19 09:47:42.170380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:54.785 [2024-11-19 09:47:42.174717] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:54.785 [2024-11-19 09:47:42.174757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.785 [2024-11-19 09:47:42.174772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:54.785 [2024-11-19 09:47:42.179048] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:54.785 [2024-11-19 09:47:42.179090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.785 [2024-11-19 09:47:42.179105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:54.785 [2024-11-19 09:47:42.183347] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:54.785 [2024-11-19 09:47:42.183386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.785 [2024-11-19 09:47:42.183400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:54.785 [2024-11-19 09:47:42.187700] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:54.786 [2024-11-19 09:47:42.187740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.786 [2024-11-19 09:47:42.187754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:54.786 [2024-11-19 09:47:42.192049] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:54.786 [2024-11-19 09:47:42.192089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.786 [2024-11-19 09:47:42.192103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:54.786 [2024-11-19 09:47:42.196428] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:54.786 [2024-11-19 09:47:42.196467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.786 [2024-11-19 09:47:42.196482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:54.786 [2024-11-19 09:47:42.200769] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:54.786 [2024-11-19 09:47:42.200810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.786 [2024-11-19 09:47:42.200825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:54.786 [2024-11-19 09:47:42.205141] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:54.786 [2024-11-19 09:47:42.205182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.786 [2024-11-19 09:47:42.205197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:54.786 [2024-11-19 09:47:42.209436] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:54.786 [2024-11-19 09:47:42.209476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.786 [2024-11-19 09:47:42.209490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:54.786 [2024-11-19 09:47:42.213792] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:54.786 [2024-11-19 09:47:42.213832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.786 [2024-11-19 09:47:42.213847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:54.786 [2024-11-19 09:47:42.218204] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:54.786 [2024-11-19 09:47:42.218273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.786 [2024-11-19 09:47:42.218289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:54.786 [2024-11-19 09:47:42.222483] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:54.786 [2024-11-19 09:47:42.222524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.786 [2024-11-19 09:47:42.222538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:54.786 [2024-11-19 09:47:42.226779] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:54.786 [2024-11-19 09:47:42.226820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.786 [2024-11-19 09:47:42.226834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:54.786 [2024-11-19 09:47:42.231168] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:54.786 [2024-11-19 09:47:42.231228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.786 [2024-11-19 09:47:42.231244] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:54.786 [2024-11-19 09:47:42.235535] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:54.786 [2024-11-19 09:47:42.235576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.786 [2024-11-19 09:47:42.235590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:54.786 [2024-11-19 09:47:42.240729] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:54.786 [2024-11-19 09:47:42.240773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.786 [2024-11-19 09:47:42.240787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:54.786 [2024-11-19 09:47:42.249696] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:54.786 [2024-11-19 09:47:42.249875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.786 [2024-11-19 09:47:42.249895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:54.786 [2024-11-19 09:47:42.256564] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:54.786 [2024-11-19 09:47:42.256736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.786 [2024-11-19 09:47:42.256754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:54.786 [2024-11-19 09:47:42.262752] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:54.786 [2024-11-19 09:47:42.262800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.786 [2024-11-19 09:47:42.262815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:54.786 [2024-11-19 09:47:42.268581] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:54.786 [2024-11-19 09:47:42.268621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.786 [2024-11-19 09:47:42.268635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:54.786 [2024-11-19 09:47:42.274348] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:54.786 [2024-11-19 09:47:42.274387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.786 
[2024-11-19 09:47:42.274401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:54.786 [2024-11-19 09:47:42.280176] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:54.786 [2024-11-19 09:47:42.280359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.786 [2024-11-19 09:47:42.280377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:54.786 [2024-11-19 09:47:42.286079] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:54.786 [2024-11-19 09:47:42.286121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.786 [2024-11-19 09:47:42.286135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:54.786 [2024-11-19 09:47:42.291841] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:54.786 [2024-11-19 09:47:42.291997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.786 [2024-11-19 09:47:42.292016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:54.786 [2024-11-19 09:47:42.298859] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:54.786 [2024-11-19 09:47:42.299014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.786 [2024-11-19 09:47:42.299032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:54.786 [2024-11-19 09:47:42.304651] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:54.786 [2024-11-19 09:47:42.304690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.786 [2024-11-19 09:47:42.304705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:54.786 [2024-11-19 09:47:42.311030] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:54.786 [2024-11-19 09:47:42.311070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.786 [2024-11-19 09:47:42.311085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:54.786 [2024-11-19 09:47:42.316863] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:54.786 [2024-11-19 09:47:42.316902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13440 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.786 [2024-11-19 09:47:42.316917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:54.786 [2024-11-19 09:47:42.323037] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:54.786 [2024-11-19 09:47:42.323078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.786 [2024-11-19 09:47:42.323092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:54.786 [2024-11-19 09:47:42.328874] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:54.786 [2024-11-19 09:47:42.328913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.786 [2024-11-19 09:47:42.328943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:54.786 [2024-11-19 09:47:42.334702] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:54.787 [2024-11-19 09:47:42.334741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.787 [2024-11-19 09:47:42.334756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:54.787 [2024-11-19 09:47:42.340529] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:54.787 [2024-11-19 09:47:42.340568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.787 [2024-11-19 09:47:42.340599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:54.787 [2024-11-19 09:47:42.346240] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:54.787 [2024-11-19 09:47:42.346280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.787 [2024-11-19 09:47:42.346294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:54.787 [2024-11-19 09:47:42.352017] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:54.787 [2024-11-19 09:47:42.352058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.787 [2024-11-19 09:47:42.352072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:54.787 [2024-11-19 09:47:42.357739] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:54.787 [2024-11-19 09:47:42.357916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 
nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.787 [2024-11-19 09:47:42.357945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:54.787 [2024-11-19 09:47:42.363580] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:54.787 [2024-11-19 09:47:42.363620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.787 [2024-11-19 09:47:42.363634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:54.787 [2024-11-19 09:47:42.368831] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:54.787 [2024-11-19 09:47:42.369001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.787 [2024-11-19 09:47:42.369019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:54.787 [2024-11-19 09:47:42.373978] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:54.787 [2024-11-19 09:47:42.374019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.787 [2024-11-19 09:47:42.374034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:54.787 [2024-11-19 09:47:42.379128] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:54.787 [2024-11-19 09:47:42.379167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.787 [2024-11-19 09:47:42.379221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:54.787 [2024-11-19 09:47:42.384262] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:54.787 [2024-11-19 09:47:42.384333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.787 [2024-11-19 09:47:42.384348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:54.787 [2024-11-19 09:47:42.389391] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:54.787 [2024-11-19 09:47:42.389430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.787 [2024-11-19 09:47:42.389444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:54.787 [2024-11-19 09:47:42.394400] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:54.787 [2024-11-19 09:47:42.394439] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.787 [2024-11-19 09:47:42.394469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:54.787 [2024-11-19 09:47:42.399357] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:54.787 [2024-11-19 09:47:42.399395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.787 [2024-11-19 09:47:42.399415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:54.787 [2024-11-19 09:47:42.404460] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:54.787 [2024-11-19 09:47:42.404499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.787 [2024-11-19 09:47:42.404513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:55.048 [2024-11-19 09:47:42.409711] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.048 [2024-11-19 09:47:42.409751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.048 [2024-11-19 09:47:42.409781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:55.048 [2024-11-19 09:47:42.414994] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.048 [2024-11-19 09:47:42.415165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.048 [2024-11-19 09:47:42.415244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:55.048 [2024-11-19 09:47:42.420472] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.048 [2024-11-19 09:47:42.420641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.048 [2024-11-19 09:47:42.420780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:55.048 [2024-11-19 09:47:42.425724] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.048 [2024-11-19 09:47:42.425913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.048 [2024-11-19 09:47:42.426043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:55.048 [2024-11-19 09:47:42.431042] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.048 
[2024-11-19 09:47:42.431261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.048 [2024-11-19 09:47:42.431392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:55.048 [2024-11-19 09:47:42.436434] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.048 [2024-11-19 09:47:42.436625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.048 [2024-11-19 09:47:42.436806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:55.048 [2024-11-19 09:47:42.441966] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.048 [2024-11-19 09:47:42.442163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.048 [2024-11-19 09:47:42.442324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:55.048 [2024-11-19 09:47:42.448891] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.048 [2024-11-19 09:47:42.449106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.048 [2024-11-19 09:47:42.449307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:55.048 [2024-11-19 09:47:42.454289] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.048 [2024-11-19 09:47:42.454504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.048 [2024-11-19 09:47:42.454632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:55.048 [2024-11-19 09:47:42.460527] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.048 [2024-11-19 09:47:42.460720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.048 [2024-11-19 09:47:42.460907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:55.048 [2024-11-19 09:47:42.469291] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.048 [2024-11-19 09:47:42.469465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.048 [2024-11-19 09:47:42.469599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:55.048 [2024-11-19 09:47:42.474932] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x86b400) 00:19:55.048 [2024-11-19 09:47:42.475103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.049 [2024-11-19 09:47:42.475315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:55.049 [2024-11-19 09:47:42.480417] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.049 [2024-11-19 09:47:42.480593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.049 [2024-11-19 09:47:42.480728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:55.049 [2024-11-19 09:47:42.485517] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.049 [2024-11-19 09:47:42.485559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.049 [2024-11-19 09:47:42.485573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:55.049 [2024-11-19 09:47:42.490935] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.049 [2024-11-19 09:47:42.490976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.049 [2024-11-19 09:47:42.490991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:55.049 [2024-11-19 09:47:42.495755] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.049 [2024-11-19 09:47:42.495910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.049 [2024-11-19 09:47:42.495927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:55.049 [2024-11-19 09:47:42.501377] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.049 [2024-11-19 09:47:42.501422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.049 [2024-11-19 09:47:42.501437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:55.049 [2024-11-19 09:47:42.506323] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.049 [2024-11-19 09:47:42.506481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.049 [2024-11-19 09:47:42.506498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:55.049 [2024-11-19 09:47:42.511593] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.049 [2024-11-19 09:47:42.511634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.049 [2024-11-19 09:47:42.511648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:55.049 [2024-11-19 09:47:42.516428] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.049 [2024-11-19 09:47:42.516471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.049 [2024-11-19 09:47:42.516485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:55.049 [2024-11-19 09:47:42.521184] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.049 [2024-11-19 09:47:42.521253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.049 [2024-11-19 09:47:42.521269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:55.049 [2024-11-19 09:47:42.525992] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.049 [2024-11-19 09:47:42.526176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.049 [2024-11-19 09:47:42.526194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:55.049 [2024-11-19 09:47:42.531008] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.049 [2024-11-19 09:47:42.531049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.049 [2024-11-19 09:47:42.531079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:55.049 [2024-11-19 09:47:42.535866] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.049 [2024-11-19 09:47:42.535906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.049 [2024-11-19 09:47:42.535936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:55.049 [2024-11-19 09:47:42.540610] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.049 [2024-11-19 09:47:42.540650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.049 [2024-11-19 09:47:42.540664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 
00:19:55.049 [2024-11-19 09:47:42.545292] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.049 [2024-11-19 09:47:42.545335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.049 [2024-11-19 09:47:42.545365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:55.049 [2024-11-19 09:47:42.550097] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.049 [2024-11-19 09:47:42.550140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.049 [2024-11-19 09:47:42.550155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:55.049 [2024-11-19 09:47:42.554765] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.049 [2024-11-19 09:47:42.554804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.049 [2024-11-19 09:47:42.554819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:55.049 [2024-11-19 09:47:42.559491] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.049 [2024-11-19 09:47:42.559532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.049 [2024-11-19 09:47:42.559546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:55.049 [2024-11-19 09:47:42.564170] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.049 [2024-11-19 09:47:42.564243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.049 [2024-11-19 09:47:42.564260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:55.049 [2024-11-19 09:47:42.568909] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.049 [2024-11-19 09:47:42.569069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.049 [2024-11-19 09:47:42.569089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:55.049 [2024-11-19 09:47:42.573812] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.049 [2024-11-19 09:47:42.573853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.049 [2024-11-19 09:47:42.573867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:55.049 [2024-11-19 09:47:42.578673] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.049 [2024-11-19 09:47:42.578716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.049 [2024-11-19 09:47:42.578731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:55.049 [2024-11-19 09:47:42.583409] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.049 [2024-11-19 09:47:42.583572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.049 [2024-11-19 09:47:42.583591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:55.049 [2024-11-19 09:47:42.588228] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.049 [2024-11-19 09:47:42.588266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.049 [2024-11-19 09:47:42.588280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:55.049 [2024-11-19 09:47:42.593011] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.049 [2024-11-19 09:47:42.593053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.049 [2024-11-19 09:47:42.593068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:55.049 [2024-11-19 09:47:42.597772] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.049 [2024-11-19 09:47:42.597813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.050 [2024-11-19 09:47:42.597827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:55.050 [2024-11-19 09:47:42.602419] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.050 [2024-11-19 09:47:42.602577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.050 [2024-11-19 09:47:42.602595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:55.050 [2024-11-19 09:47:42.607321] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.050 [2024-11-19 09:47:42.607362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.050 [2024-11-19 09:47:42.607377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:55.050 [2024-11-19 09:47:42.611975] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.050 [2024-11-19 09:47:42.612015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.050 [2024-11-19 09:47:42.612046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:55.050 [2024-11-19 09:47:42.616727] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.050 [2024-11-19 09:47:42.616765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.050 [2024-11-19 09:47:42.616795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:55.050 [2024-11-19 09:47:42.621504] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.050 [2024-11-19 09:47:42.621671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.050 [2024-11-19 09:47:42.621689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:55.050 [2024-11-19 09:47:42.626487] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.050 [2024-11-19 09:47:42.626527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.050 [2024-11-19 09:47:42.626542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:55.050 [2024-11-19 09:47:42.631267] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.050 [2024-11-19 09:47:42.631307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.050 [2024-11-19 09:47:42.631321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:55.050 [2024-11-19 09:47:42.635917] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.050 [2024-11-19 09:47:42.635954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.050 [2024-11-19 09:47:42.635983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:55.050 [2024-11-19 09:47:42.640736] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.050 [2024-11-19 09:47:42.640915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.050 [2024-11-19 09:47:42.640932] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:55.050 [2024-11-19 09:47:42.645578] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.050 [2024-11-19 09:47:42.645616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.050 [2024-11-19 09:47:42.645645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:55.050 [2024-11-19 09:47:42.650376] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.050 [2024-11-19 09:47:42.650414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.050 [2024-11-19 09:47:42.650444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:55.050 [2024-11-19 09:47:42.655006] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.050 [2024-11-19 09:47:42.655043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.050 [2024-11-19 09:47:42.655072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:55.050 [2024-11-19 09:47:42.659702] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.050 [2024-11-19 09:47:42.659852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.050 [2024-11-19 09:47:42.659872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:55.050 [2024-11-19 09:47:42.664392] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.050 [2024-11-19 09:47:42.664429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.050 [2024-11-19 09:47:42.664459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:55.050 [2024-11-19 09:47:42.668845] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.050 [2024-11-19 09:47:42.668884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.050 [2024-11-19 09:47:42.668914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:55.310 [2024-11-19 09:47:42.673158] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.310 [2024-11-19 09:47:42.673196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.310 
[2024-11-19 09:47:42.673240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:55.310 [2024-11-19 09:47:42.677729] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.311 [2024-11-19 09:47:42.677770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.311 [2024-11-19 09:47:42.677784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:55.311 [2024-11-19 09:47:42.682127] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.311 [2024-11-19 09:47:42.682165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.311 [2024-11-19 09:47:42.682195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:55.311 [2024-11-19 09:47:42.686505] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.311 [2024-11-19 09:47:42.686658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.311 [2024-11-19 09:47:42.686676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:55.311 [2024-11-19 09:47:42.691028] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.311 [2024-11-19 09:47:42.691070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.311 [2024-11-19 09:47:42.691084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:55.311 [2024-11-19 09:47:42.695570] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.311 [2024-11-19 09:47:42.695612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.311 [2024-11-19 09:47:42.695627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:55.311 [2024-11-19 09:47:42.700062] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.311 [2024-11-19 09:47:42.700101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.311 [2024-11-19 09:47:42.700131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:55.311 [2024-11-19 09:47:42.704441] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.311 [2024-11-19 09:47:42.704512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20128 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.311 [2024-11-19 09:47:42.704526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:55.311 [2024-11-19 09:47:42.708936] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.311 [2024-11-19 09:47:42.708975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.311 [2024-11-19 09:47:42.709006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:55.311 [2024-11-19 09:47:42.713292] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.311 [2024-11-19 09:47:42.713330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.311 [2024-11-19 09:47:42.713359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:55.311 [2024-11-19 09:47:42.717669] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.311 [2024-11-19 09:47:42.717709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.311 [2024-11-19 09:47:42.717724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:55.311 [2024-11-19 09:47:42.721966] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.311 [2024-11-19 09:47:42.722005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.311 [2024-11-19 09:47:42.722034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:55.311 [2024-11-19 09:47:42.726329] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.311 [2024-11-19 09:47:42.726368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.311 [2024-11-19 09:47:42.726398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:55.311 [2024-11-19 09:47:42.730930] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.311 [2024-11-19 09:47:42.730970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.311 [2024-11-19 09:47:42.730984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:55.311 [2024-11-19 09:47:42.735899] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.311 [2024-11-19 09:47:42.735939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:6 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.311 [2024-11-19 09:47:42.735953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:55.311 [2024-11-19 09:47:42.740706] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.311 [2024-11-19 09:47:42.740747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.311 [2024-11-19 09:47:42.740761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:55.311 [2024-11-19 09:47:42.745255] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.311 [2024-11-19 09:47:42.745293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.311 [2024-11-19 09:47:42.745308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:55.311 [2024-11-19 09:47:42.749613] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.311 [2024-11-19 09:47:42.749653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.311 [2024-11-19 09:47:42.749667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:55.311 [2024-11-19 09:47:42.754173] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.311 [2024-11-19 09:47:42.754224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.311 [2024-11-19 09:47:42.754239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:55.311 [2024-11-19 09:47:42.758798] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.311 [2024-11-19 09:47:42.758839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.311 [2024-11-19 09:47:42.758854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:55.311 [2024-11-19 09:47:42.763371] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.312 [2024-11-19 09:47:42.763410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.312 [2024-11-19 09:47:42.763423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:55.312 [2024-11-19 09:47:42.767960] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.312 [2024-11-19 09:47:42.768011] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.312 [2024-11-19 09:47:42.768026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:55.312 [2024-11-19 09:47:42.772544] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.312 [2024-11-19 09:47:42.772599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.312 [2024-11-19 09:47:42.772628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:55.312 [2024-11-19 09:47:42.777161] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.312 [2024-11-19 09:47:42.777200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.312 [2024-11-19 09:47:42.777260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:55.312 [2024-11-19 09:47:42.781728] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.312 [2024-11-19 09:47:42.781768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.312 [2024-11-19 09:47:42.781783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:55.312 [2024-11-19 09:47:42.786133] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.312 [2024-11-19 09:47:42.786174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.312 [2024-11-19 09:47:42.786204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:55.312 [2024-11-19 09:47:42.790748] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.312 [2024-11-19 09:47:42.790966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.312 [2024-11-19 09:47:42.790986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:55.312 [2024-11-19 09:47:42.795610] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.312 [2024-11-19 09:47:42.795809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.312 [2024-11-19 09:47:42.795936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:55.312 [2024-11-19 09:47:42.800398] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.312 
[2024-11-19 09:47:42.800607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.312 [2024-11-19 09:47:42.800743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:55.312 [2024-11-19 09:47:42.805176] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.312 [2024-11-19 09:47:42.805400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.312 [2024-11-19 09:47:42.805581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:55.312 [2024-11-19 09:47:42.809900] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.312 [2024-11-19 09:47:42.810099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.312 [2024-11-19 09:47:42.810297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:55.312 [2024-11-19 09:47:42.814725] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.312 [2024-11-19 09:47:42.814911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.312 [2024-11-19 09:47:42.815037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:55.312 [2024-11-19 09:47:42.819385] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.312 [2024-11-19 09:47:42.819560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.312 [2024-11-19 09:47:42.819690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:55.312 [2024-11-19 09:47:42.824193] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.312 [2024-11-19 09:47:42.824425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.312 [2024-11-19 09:47:42.824551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:55.312 [2024-11-19 09:47:42.829159] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.312 [2024-11-19 09:47:42.829353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.312 [2024-11-19 09:47:42.829537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:55.312 [2024-11-19 09:47:42.834187] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x86b400) 00:19:55.312 [2024-11-19 09:47:42.834417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.312 [2024-11-19 09:47:42.834525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:55.312 [2024-11-19 09:47:42.838740] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.312 [2024-11-19 09:47:42.838780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.312 [2024-11-19 09:47:42.838809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:55.312 [2024-11-19 09:47:42.842903] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.312 [2024-11-19 09:47:42.842957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.312 [2024-11-19 09:47:42.842987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:55.312 [2024-11-19 09:47:42.847155] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.312 [2024-11-19 09:47:42.847218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.312 [2024-11-19 09:47:42.847264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:55.312 [2024-11-19 09:47:42.851536] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.312 [2024-11-19 09:47:42.851580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.313 [2024-11-19 09:47:42.851594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:55.313 [2024-11-19 09:47:42.855769] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.313 [2024-11-19 09:47:42.855837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.313 [2024-11-19 09:47:42.855867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:55.313 [2024-11-19 09:47:42.860372] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.313 [2024-11-19 09:47:42.860412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.313 [2024-11-19 09:47:42.860457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:55.313 [2024-11-19 09:47:42.865053] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.313 [2024-11-19 09:47:42.865094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.313 [2024-11-19 09:47:42.865124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:55.313 [2024-11-19 09:47:42.869643] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.313 [2024-11-19 09:47:42.869829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.313 [2024-11-19 09:47:42.869848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:55.313 [2024-11-19 09:47:42.874105] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.313 [2024-11-19 09:47:42.874147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.313 [2024-11-19 09:47:42.874177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:55.313 [2024-11-19 09:47:42.878223] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.313 [2024-11-19 09:47:42.878258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.313 [2024-11-19 09:47:42.878287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:55.313 [2024-11-19 09:47:42.882365] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.313 [2024-11-19 09:47:42.882406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.313 [2024-11-19 09:47:42.882435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:55.313 [2024-11-19 09:47:42.886439] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.313 [2024-11-19 09:47:42.886485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.313 [2024-11-19 09:47:42.886514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:55.313 [2024-11-19 09:47:42.890587] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.313 [2024-11-19 09:47:42.890624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.313 [2024-11-19 09:47:42.890653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 
00:19:55.313 [2024-11-19 09:47:42.894748] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.313 [2024-11-19 09:47:42.894784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.313 [2024-11-19 09:47:42.894813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:55.313 [2024-11-19 09:47:42.898870] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.313 [2024-11-19 09:47:42.898906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.313 [2024-11-19 09:47:42.898935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:55.313 [2024-11-19 09:47:42.903001] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.313 [2024-11-19 09:47:42.903037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.313 [2024-11-19 09:47:42.903065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:55.313 [2024-11-19 09:47:42.907255] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.313 [2024-11-19 09:47:42.907293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.313 [2024-11-19 09:47:42.907323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:55.313 [2024-11-19 09:47:42.911374] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.313 [2024-11-19 09:47:42.911413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.313 [2024-11-19 09:47:42.911427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:55.313 [2024-11-19 09:47:42.915585] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.313 [2024-11-19 09:47:42.915640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.313 [2024-11-19 09:47:42.915685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:55.313 [2024-11-19 09:47:42.919870] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.313 [2024-11-19 09:47:42.919907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.313 [2024-11-19 09:47:42.919937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:55.313 [2024-11-19 09:47:42.924138] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.313 [2024-11-19 09:47:42.924175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.313 [2024-11-19 09:47:42.924204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:55.313 [2024-11-19 09:47:42.928535] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.313 [2024-11-19 09:47:42.928572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.313 [2024-11-19 09:47:42.928601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:55.573 [2024-11-19 09:47:42.933155] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.574 [2024-11-19 09:47:42.933193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.574 [2024-11-19 09:47:42.933239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:55.574 [2024-11-19 09:47:42.937522] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.574 [2024-11-19 09:47:42.937560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.574 [2024-11-19 09:47:42.937590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:55.574 [2024-11-19 09:47:42.941851] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.574 [2024-11-19 09:47:42.941889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.574 [2024-11-19 09:47:42.941918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:55.574 [2024-11-19 09:47:42.946138] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.574 [2024-11-19 09:47:42.946177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.574 [2024-11-19 09:47:42.946206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:55.574 [2024-11-19 09:47:42.950421] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.574 [2024-11-19 09:47:42.950458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.574 [2024-11-19 09:47:42.950487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:55.574 [2024-11-19 09:47:42.954449] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.574 [2024-11-19 09:47:42.954485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.574 [2024-11-19 09:47:42.954514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:55.574 [2024-11-19 09:47:42.958547] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.574 [2024-11-19 09:47:42.958583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.574 [2024-11-19 09:47:42.958612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:55.574 [2024-11-19 09:47:42.962716] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.574 [2024-11-19 09:47:42.962753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.574 [2024-11-19 09:47:42.962783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:55.574 [2024-11-19 09:47:42.966795] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.574 [2024-11-19 09:47:42.966831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.574 [2024-11-19 09:47:42.966860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:55.574 [2024-11-19 09:47:42.971275] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.574 [2024-11-19 09:47:42.971313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.574 [2024-11-19 09:47:42.971327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:55.574 [2024-11-19 09:47:42.975672] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.574 [2024-11-19 09:47:42.975835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.574 [2024-11-19 09:47:42.975853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:55.574 [2024-11-19 09:47:42.980350] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.574 [2024-11-19 09:47:42.980388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.574 [2024-11-19 09:47:42.980418] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:55.574 [2024-11-19 09:47:42.984819] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.574 [2024-11-19 09:47:42.984858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.574 [2024-11-19 09:47:42.984887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:55.574 [2024-11-19 09:47:42.989124] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.574 [2024-11-19 09:47:42.989179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.574 [2024-11-19 09:47:42.989193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:55.574 [2024-11-19 09:47:42.993625] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.574 [2024-11-19 09:47:42.993664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.574 [2024-11-19 09:47:42.993694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:55.574 [2024-11-19 09:47:42.997997] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.574 [2024-11-19 09:47:42.998034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.574 [2024-11-19 09:47:42.998064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:55.574 [2024-11-19 09:47:43.002551] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.574 [2024-11-19 09:47:43.002718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.574 [2024-11-19 09:47:43.002736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:55.574 [2024-11-19 09:47:43.007334] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.574 [2024-11-19 09:47:43.007374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.574 [2024-11-19 09:47:43.007388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:55.574 [2024-11-19 09:47:43.011659] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.574 [2024-11-19 09:47:43.011697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.574 
[2024-11-19 09:47:43.011727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:55.574 [2024-11-19 09:47:43.016324] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.574 [2024-11-19 09:47:43.016392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.574 [2024-11-19 09:47:43.016422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:55.574 [2024-11-19 09:47:43.020874] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.574 [2024-11-19 09:47:43.020910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.574 [2024-11-19 09:47:43.020940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:55.574 [2024-11-19 09:47:43.025268] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.574 [2024-11-19 09:47:43.025304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.574 [2024-11-19 09:47:43.025333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:55.574 [2024-11-19 09:47:43.029671] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.574 [2024-11-19 09:47:43.029710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.574 [2024-11-19 09:47:43.029740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:55.574 [2024-11-19 09:47:43.033976] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.574 [2024-11-19 09:47:43.034016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.574 [2024-11-19 09:47:43.034047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:55.574 [2024-11-19 09:47:43.038337] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.574 [2024-11-19 09:47:43.038376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.574 [2024-11-19 09:47:43.038406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:55.574 [2024-11-19 09:47:43.042643] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.574 [2024-11-19 09:47:43.042683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14560 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:19:55.574 [2024-11-19 09:47:43.042713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:55.574 [2024-11-19 09:47:43.047085] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.574 [2024-11-19 09:47:43.047124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.575 [2024-11-19 09:47:43.047138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:55.575 [2024-11-19 09:47:43.051561] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.575 [2024-11-19 09:47:43.051600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.575 [2024-11-19 09:47:43.051615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:55.575 [2024-11-19 09:47:43.056080] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.575 [2024-11-19 09:47:43.056118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.575 [2024-11-19 09:47:43.056148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:55.575 [2024-11-19 09:47:43.060527] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.575 [2024-11-19 09:47:43.060696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.575 [2024-11-19 09:47:43.060715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:55.575 [2024-11-19 09:47:43.065119] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.575 [2024-11-19 09:47:43.065158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.575 [2024-11-19 09:47:43.065188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:55.575 [2024-11-19 09:47:43.069451] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.575 [2024-11-19 09:47:43.069505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.575 [2024-11-19 09:47:43.069535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:55.575 [2024-11-19 09:47:43.073919] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.575 [2024-11-19 09:47:43.073957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:1 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.575 [2024-11-19 09:47:43.073987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:55.575 [2024-11-19 09:47:43.078203] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.575 [2024-11-19 09:47:43.078266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.575 [2024-11-19 09:47:43.078296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:55.575 [2024-11-19 09:47:43.082587] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.575 [2024-11-19 09:47:43.082642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.575 [2024-11-19 09:47:43.082673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:55.575 [2024-11-19 09:47:43.086937] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.575 [2024-11-19 09:47:43.086977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.575 [2024-11-19 09:47:43.086991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:55.575 [2024-11-19 09:47:43.091281] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.575 [2024-11-19 09:47:43.091321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.575 [2024-11-19 09:47:43.091335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:55.575 [2024-11-19 09:47:43.095555] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.575 [2024-11-19 09:47:43.095595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.575 [2024-11-19 09:47:43.095609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:55.575 [2024-11-19 09:47:43.099938] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.575 [2024-11-19 09:47:43.099980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.575 [2024-11-19 09:47:43.100011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:55.575 [2024-11-19 09:47:43.104389] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.575 [2024-11-19 09:47:43.104430] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.575 [2024-11-19 09:47:43.104461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:55.575 [2024-11-19 09:47:43.108781] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.575 [2024-11-19 09:47:43.108821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.575 [2024-11-19 09:47:43.108837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:55.575 [2024-11-19 09:47:43.113120] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.575 [2024-11-19 09:47:43.113159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.575 [2024-11-19 09:47:43.113189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:55.575 [2024-11-19 09:47:43.117448] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.575 [2024-11-19 09:47:43.117486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.575 [2024-11-19 09:47:43.117516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:55.575 [2024-11-19 09:47:43.121796] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.575 [2024-11-19 09:47:43.121836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.575 [2024-11-19 09:47:43.121866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:55.575 [2024-11-19 09:47:43.126142] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.575 [2024-11-19 09:47:43.126181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.575 [2024-11-19 09:47:43.126195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:55.575 [2024-11-19 09:47:43.130489] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.575 [2024-11-19 09:47:43.130528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.575 [2024-11-19 09:47:43.130542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:55.575 [2024-11-19 09:47:43.134837] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.575 
[2024-11-19 09:47:43.134876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.575 [2024-11-19 09:47:43.134906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:55.575 6432.00 IOPS, 804.00 MiB/s [2024-11-19T09:47:43.198Z] [2024-11-19 09:47:43.140895] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.575 [2024-11-19 09:47:43.140934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.575 [2024-11-19 09:47:43.140964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:55.575 [2024-11-19 09:47:43.145272] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.575 [2024-11-19 09:47:43.145318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.575 [2024-11-19 09:47:43.145347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:55.575 [2024-11-19 09:47:43.149725] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.575 [2024-11-19 09:47:43.149927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.575 [2024-11-19 09:47:43.149945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:55.575 [2024-11-19 09:47:43.154393] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.575 [2024-11-19 09:47:43.154435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.575 [2024-11-19 09:47:43.154449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:55.575 [2024-11-19 09:47:43.158952] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.575 [2024-11-19 09:47:43.158991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.575 [2024-11-19 09:47:43.159022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:55.575 [2024-11-19 09:47:43.163396] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.575 [2024-11-19 09:47:43.163438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.575 [2024-11-19 09:47:43.163452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:55.575 [2024-11-19 09:47:43.167742] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.576 [2024-11-19 09:47:43.167781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.576 [2024-11-19 09:47:43.167811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:55.576 [2024-11-19 09:47:43.172098] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.576 [2024-11-19 09:47:43.172142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.576 [2024-11-19 09:47:43.172173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:55.576 [2024-11-19 09:47:43.176422] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.576 [2024-11-19 09:47:43.176461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.576 [2024-11-19 09:47:43.176491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:55.576 [2024-11-19 09:47:43.180808] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.576 [2024-11-19 09:47:43.180861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.576 [2024-11-19 09:47:43.180892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:55.576 [2024-11-19 09:47:43.185168] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.576 [2024-11-19 09:47:43.185236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.576 [2024-11-19 09:47:43.185268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:55.576 [2024-11-19 09:47:43.189611] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.576 [2024-11-19 09:47:43.189650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.576 [2024-11-19 09:47:43.189680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:55.576 [2024-11-19 09:47:43.194070] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.576 [2024-11-19 09:47:43.194109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.576 [2024-11-19 09:47:43.194139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 
00:19:55.835 [2024-11-19 09:47:43.198494] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.835 [2024-11-19 09:47:43.198532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.835 [2024-11-19 09:47:43.198563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:55.835 [2024-11-19 09:47:43.202887] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.835 [2024-11-19 09:47:43.202924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.835 [2024-11-19 09:47:43.202954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:55.835 [2024-11-19 09:47:43.207317] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.836 [2024-11-19 09:47:43.207355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.836 [2024-11-19 09:47:43.207369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:55.836 [2024-11-19 09:47:43.211724] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.836 [2024-11-19 09:47:43.211763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.836 [2024-11-19 09:47:43.211793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:55.836 [2024-11-19 09:47:43.216081] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.836 [2024-11-19 09:47:43.216119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.836 [2024-11-19 09:47:43.216149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:55.836 [2024-11-19 09:47:43.220367] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.836 [2024-11-19 09:47:43.220403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.836 [2024-11-19 09:47:43.220433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:55.836 [2024-11-19 09:47:43.224758] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.836 [2024-11-19 09:47:43.224798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.836 [2024-11-19 09:47:43.224828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:55.836 [2024-11-19 09:47:43.229155] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.836 [2024-11-19 09:47:43.229194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.836 [2024-11-19 09:47:43.229237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:55.836 [2024-11-19 09:47:43.233459] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.836 [2024-11-19 09:47:43.233496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.836 [2024-11-19 09:47:43.233527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:55.836 [2024-11-19 09:47:43.237757] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.836 [2024-11-19 09:47:43.237795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.836 [2024-11-19 09:47:43.237825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:55.836 [2024-11-19 09:47:43.242120] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.836 [2024-11-19 09:47:43.242158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.836 [2024-11-19 09:47:43.242187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:55.836 [2024-11-19 09:47:43.246501] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.836 [2024-11-19 09:47:43.246659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.836 [2024-11-19 09:47:43.246677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:55.836 [2024-11-19 09:47:43.251328] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.836 [2024-11-19 09:47:43.251498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.836 [2024-11-19 09:47:43.251635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:55.836 [2024-11-19 09:47:43.256073] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.836 [2024-11-19 09:47:43.256277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.836 [2024-11-19 09:47:43.256460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:55.836 [2024-11-19 09:47:43.260957] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.836 [2024-11-19 09:47:43.261132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.836 [2024-11-19 09:47:43.261364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:55.836 [2024-11-19 09:47:43.265803] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.836 [2024-11-19 09:47:43.265991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.836 [2024-11-19 09:47:43.266121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:55.836 [2024-11-19 09:47:43.270496] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.836 [2024-11-19 09:47:43.270670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.836 [2024-11-19 09:47:43.270802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:55.836 [2024-11-19 09:47:43.275367] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.836 [2024-11-19 09:47:43.275561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.836 [2024-11-19 09:47:43.275688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:55.836 [2024-11-19 09:47:43.280147] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.836 [2024-11-19 09:47:43.280345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.836 [2024-11-19 09:47:43.280487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:55.836 [2024-11-19 09:47:43.285009] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.836 [2024-11-19 09:47:43.285202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.836 [2024-11-19 09:47:43.285342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:55.836 [2024-11-19 09:47:43.289778] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.836 [2024-11-19 09:47:43.289956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.836 [2024-11-19 09:47:43.290085] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:55.836 [2024-11-19 09:47:43.294602] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.836 [2024-11-19 09:47:43.294793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.836 [2024-11-19 09:47:43.294938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:55.836 [2024-11-19 09:47:43.299462] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.836 [2024-11-19 09:47:43.299513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.836 [2024-11-19 09:47:43.299529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:55.836 [2024-11-19 09:47:43.303868] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.836 [2024-11-19 09:47:43.303909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.836 [2024-11-19 09:47:43.303924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:55.836 [2024-11-19 09:47:43.308202] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.836 [2024-11-19 09:47:43.308259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.836 [2024-11-19 09:47:43.308273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:55.836 [2024-11-19 09:47:43.312598] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.836 [2024-11-19 09:47:43.312638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.836 [2024-11-19 09:47:43.312652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:55.836 [2024-11-19 09:47:43.316976] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.836 [2024-11-19 09:47:43.317016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.836 [2024-11-19 09:47:43.317030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:55.836 [2024-11-19 09:47:43.321219] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.836 [2024-11-19 09:47:43.321257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.836 
[2024-11-19 09:47:43.321270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:55.836 [2024-11-19 09:47:43.325571] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.836 [2024-11-19 09:47:43.325611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.837 [2024-11-19 09:47:43.325624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:55.837 [2024-11-19 09:47:43.329956] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.837 [2024-11-19 09:47:43.329994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.837 [2024-11-19 09:47:43.330007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:55.837 [2024-11-19 09:47:43.334327] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.837 [2024-11-19 09:47:43.334364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.837 [2024-11-19 09:47:43.334378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:55.837 [2024-11-19 09:47:43.338726] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.837 [2024-11-19 09:47:43.338779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.837 [2024-11-19 09:47:43.338792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:55.837 [2024-11-19 09:47:43.343105] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.837 [2024-11-19 09:47:43.343159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.837 [2024-11-19 09:47:43.343172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:55.837 [2024-11-19 09:47:43.347417] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.837 [2024-11-19 09:47:43.347456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.837 [2024-11-19 09:47:43.347471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:55.837 [2024-11-19 09:47:43.351698] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.837 [2024-11-19 09:47:43.351751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.837 [2024-11-19 09:47:43.351765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:55.837 [2024-11-19 09:47:43.356054] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.837 [2024-11-19 09:47:43.356113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.837 [2024-11-19 09:47:43.356127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:55.837 [2024-11-19 09:47:43.360500] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.837 [2024-11-19 09:47:43.360541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.837 [2024-11-19 09:47:43.360555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:55.837 [2024-11-19 09:47:43.364871] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.837 [2024-11-19 09:47:43.364926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.837 [2024-11-19 09:47:43.364940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:55.837 [2024-11-19 09:47:43.369229] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.837 [2024-11-19 09:47:43.369296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.837 [2024-11-19 09:47:43.369310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:55.837 [2024-11-19 09:47:43.373620] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.837 [2024-11-19 09:47:43.373675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.837 [2024-11-19 09:47:43.373688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:55.837 [2024-11-19 09:47:43.377946] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.837 [2024-11-19 09:47:43.377999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.837 [2024-11-19 09:47:43.378013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:55.837 [2024-11-19 09:47:43.382299] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.837 [2024-11-19 09:47:43.382352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:7 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.837 [2024-11-19 09:47:43.382366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:55.837 [2024-11-19 09:47:43.386609] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.837 [2024-11-19 09:47:43.386663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.837 [2024-11-19 09:47:43.386677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:55.837 [2024-11-19 09:47:43.390909] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.837 [2024-11-19 09:47:43.390957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.837 [2024-11-19 09:47:43.390971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:55.837 [2024-11-19 09:47:43.395274] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.837 [2024-11-19 09:47:43.395313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.837 [2024-11-19 09:47:43.395327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:55.837 [2024-11-19 09:47:43.399533] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.837 [2024-11-19 09:47:43.399572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.837 [2024-11-19 09:47:43.399585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:55.837 [2024-11-19 09:47:43.403816] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.837 [2024-11-19 09:47:43.403853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.837 [2024-11-19 09:47:43.403867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:55.837 [2024-11-19 09:47:43.408231] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.837 [2024-11-19 09:47:43.408268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.837 [2024-11-19 09:47:43.408282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:55.837 [2024-11-19 09:47:43.412542] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.837 [2024-11-19 09:47:43.412581] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.837 [2024-11-19 09:47:43.412595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:55.837 [2024-11-19 09:47:43.416888] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.837 [2024-11-19 09:47:43.416927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.837 [2024-11-19 09:47:43.416940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:55.837 [2024-11-19 09:47:43.421232] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.837 [2024-11-19 09:47:43.421267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.837 [2024-11-19 09:47:43.421281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:55.837 [2024-11-19 09:47:43.425493] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.837 [2024-11-19 09:47:43.425530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.837 [2024-11-19 09:47:43.425544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:55.837 [2024-11-19 09:47:43.429819] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.837 [2024-11-19 09:47:43.429858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.837 [2024-11-19 09:47:43.429872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:55.837 [2024-11-19 09:47:43.434176] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.837 [2024-11-19 09:47:43.434232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.837 [2024-11-19 09:47:43.434247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:55.837 [2024-11-19 09:47:43.438522] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.837 [2024-11-19 09:47:43.438560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.837 [2024-11-19 09:47:43.438574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:55.837 [2024-11-19 09:47:43.442859] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.837 
[2024-11-19 09:47:43.442897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.838 [2024-11-19 09:47:43.442912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:55.838 [2024-11-19 09:47:43.447261] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.838 [2024-11-19 09:47:43.447299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.838 [2024-11-19 09:47:43.447312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:55.838 [2024-11-19 09:47:43.451519] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.838 [2024-11-19 09:47:43.451559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.838 [2024-11-19 09:47:43.451573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:55.838 [2024-11-19 09:47:43.455823] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:55.838 [2024-11-19 09:47:43.455862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.838 [2024-11-19 09:47:43.455875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:56.097 [2024-11-19 09:47:43.460349] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:56.097 [2024-11-19 09:47:43.460404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.098 [2024-11-19 09:47:43.460418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:56.098 [2024-11-19 09:47:43.464757] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:56.098 [2024-11-19 09:47:43.464796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.098 [2024-11-19 09:47:43.464810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:56.098 [2024-11-19 09:47:43.469119] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:56.098 [2024-11-19 09:47:43.469173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.098 [2024-11-19 09:47:43.469187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:56.098 [2024-11-19 09:47:43.473501] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0x86b400) 00:19:56.098 [2024-11-19 09:47:43.473538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.098 [2024-11-19 09:47:43.473551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:56.098 [2024-11-19 09:47:43.477931] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:56.098 [2024-11-19 09:47:43.477985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.098 [2024-11-19 09:47:43.477999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:56.098 [2024-11-19 09:47:43.482333] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:56.098 [2024-11-19 09:47:43.482387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.098 [2024-11-19 09:47:43.482400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:56.098 [2024-11-19 09:47:43.486680] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:56.098 [2024-11-19 09:47:43.486733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.098 [2024-11-19 09:47:43.486746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:56.098 [2024-11-19 09:47:43.491085] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:56.098 [2024-11-19 09:47:43.491138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.098 [2024-11-19 09:47:43.491152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:56.098 [2024-11-19 09:47:43.495574] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:56.098 [2024-11-19 09:47:43.495613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.098 [2024-11-19 09:47:43.495627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:56.098 [2024-11-19 09:47:43.499913] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:56.098 [2024-11-19 09:47:43.499967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.098 [2024-11-19 09:47:43.499981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:56.098 [2024-11-19 09:47:43.504286] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:56.098 [2024-11-19 09:47:43.504322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.098 [2024-11-19 09:47:43.504336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:56.098 [2024-11-19 09:47:43.508567] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:56.098 [2024-11-19 09:47:43.508606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.098 [2024-11-19 09:47:43.508620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:56.098 [2024-11-19 09:47:43.512969] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:56.098 [2024-11-19 09:47:43.513007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.098 [2024-11-19 09:47:43.513021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:56.098 [2024-11-19 09:47:43.517272] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:56.098 [2024-11-19 09:47:43.517311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.098 [2024-11-19 09:47:43.517324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:56.098 [2024-11-19 09:47:43.522270] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:56.098 [2024-11-19 09:47:43.522307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.098 [2024-11-19 09:47:43.522320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:56.098 [2024-11-19 09:47:43.526968] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:56.098 [2024-11-19 09:47:43.527007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.098 [2024-11-19 09:47:43.527021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:56.098 [2024-11-19 09:47:43.531461] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:56.098 [2024-11-19 09:47:43.531500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.098 [2024-11-19 09:47:43.531514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 
00:19:56.098 [2024-11-19 09:47:43.535931] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:56.098 [2024-11-19 09:47:43.535971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.098 [2024-11-19 09:47:43.535985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:56.098 [2024-11-19 09:47:43.540300] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:56.098 [2024-11-19 09:47:43.540337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.098 [2024-11-19 09:47:43.540352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:56.098 [2024-11-19 09:47:43.544757] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:56.098 [2024-11-19 09:47:43.544798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.098 [2024-11-19 09:47:43.544812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:56.098 [2024-11-19 09:47:43.549277] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:56.098 [2024-11-19 09:47:43.549321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.098 [2024-11-19 09:47:43.549335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:56.098 [2024-11-19 09:47:43.553649] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:56.099 [2024-11-19 09:47:43.553688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.099 [2024-11-19 09:47:43.553702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:56.099 [2024-11-19 09:47:43.558118] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:56.099 [2024-11-19 09:47:43.558158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.099 [2024-11-19 09:47:43.558172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:56.099 [2024-11-19 09:47:43.562469] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:56.099 [2024-11-19 09:47:43.562508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.099 [2024-11-19 09:47:43.562522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:56.099 [2024-11-19 09:47:43.566835] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:56.099 [2024-11-19 09:47:43.566873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.099 [2024-11-19 09:47:43.566886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:56.099 [2024-11-19 09:47:43.571177] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:56.099 [2024-11-19 09:47:43.571241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.099 [2024-11-19 09:47:43.571256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:56.099 [2024-11-19 09:47:43.575556] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:56.099 [2024-11-19 09:47:43.575596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.099 [2024-11-19 09:47:43.575609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:56.099 [2024-11-19 09:47:43.579894] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:56.099 [2024-11-19 09:47:43.579947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.099 [2024-11-19 09:47:43.579961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:56.099 [2024-11-19 09:47:43.584275] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:56.099 [2024-11-19 09:47:43.584313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.099 [2024-11-19 09:47:43.584327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:56.099 [2024-11-19 09:47:43.588586] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:56.099 [2024-11-19 09:47:43.588641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.099 [2024-11-19 09:47:43.588656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:56.099 [2024-11-19 09:47:43.592989] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:56.099 [2024-11-19 09:47:43.593043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.099 [2024-11-19 09:47:43.593057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:56.099 [2024-11-19 09:47:43.597367] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:56.099 [2024-11-19 09:47:43.597421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.099 [2024-11-19 09:47:43.597435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:56.099 [2024-11-19 09:47:43.601709] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:56.099 [2024-11-19 09:47:43.601746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.099 [2024-11-19 09:47:43.601760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:56.099 [2024-11-19 09:47:43.606091] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:56.099 [2024-11-19 09:47:43.606129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.099 [2024-11-19 09:47:43.606142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:56.099 [2024-11-19 09:47:43.610440] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:56.099 [2024-11-19 09:47:43.610478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.099 [2024-11-19 09:47:43.610493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:56.099 [2024-11-19 09:47:43.614782] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:56.099 [2024-11-19 09:47:43.614835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.099 [2024-11-19 09:47:43.614864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:56.099 [2024-11-19 09:47:43.619076] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:56.099 [2024-11-19 09:47:43.619129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.099 [2024-11-19 09:47:43.619158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:56.099 [2024-11-19 09:47:43.623428] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:56.099 [2024-11-19 09:47:43.623466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.099 [2024-11-19 09:47:43.623479] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:56.099 [2024-11-19 09:47:43.627732] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:56.099 [2024-11-19 09:47:43.627768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.099 [2024-11-19 09:47:43.627782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:56.099 [2024-11-19 09:47:43.632122] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:56.099 [2024-11-19 09:47:43.632176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.099 [2024-11-19 09:47:43.632189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:56.099 [2024-11-19 09:47:43.636476] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:56.099 [2024-11-19 09:47:43.636516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.099 [2024-11-19 09:47:43.636530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:56.099 [2024-11-19 09:47:43.640818] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:56.099 [2024-11-19 09:47:43.640857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.099 [2024-11-19 09:47:43.640871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:56.099 [2024-11-19 09:47:43.645256] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:56.100 [2024-11-19 09:47:43.645309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.100 [2024-11-19 09:47:43.645323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:56.100 [2024-11-19 09:47:43.649658] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:56.100 [2024-11-19 09:47:43.649695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.100 [2024-11-19 09:47:43.649709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:56.100 [2024-11-19 09:47:43.654118] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:56.100 [2024-11-19 09:47:43.654173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.100 
[2024-11-19 09:47:43.654186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:56.100 [2024-11-19 09:47:43.658559] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:56.100 [2024-11-19 09:47:43.658597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.100 [2024-11-19 09:47:43.658610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:56.100 [2024-11-19 09:47:43.663046] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:56.100 [2024-11-19 09:47:43.663102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.100 [2024-11-19 09:47:43.663127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:56.100 [2024-11-19 09:47:43.667488] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:56.100 [2024-11-19 09:47:43.667530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.100 [2024-11-19 09:47:43.667544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:56.100 [2024-11-19 09:47:43.671910] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:56.100 [2024-11-19 09:47:43.671971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.100 [2024-11-19 09:47:43.671986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:56.100 [2024-11-19 09:47:43.676300] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:56.100 [2024-11-19 09:47:43.676340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.100 [2024-11-19 09:47:43.676354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:56.100 [2024-11-19 09:47:43.680610] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:56.100 [2024-11-19 09:47:43.680652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.100 [2024-11-19 09:47:43.680666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:56.100 [2024-11-19 09:47:43.685073] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:56.100 [2024-11-19 09:47:43.685152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22816 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:19:56.100 [2024-11-19 09:47:43.685167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:56.100 [2024-11-19 09:47:43.689504] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:56.100 [2024-11-19 09:47:43.689559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.100 [2024-11-19 09:47:43.689572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:56.100 [2024-11-19 09:47:43.693895] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:56.100 [2024-11-19 09:47:43.693957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.100 [2024-11-19 09:47:43.693971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:56.100 [2024-11-19 09:47:43.698367] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:56.100 [2024-11-19 09:47:43.698407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.100 [2024-11-19 09:47:43.698421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:56.100 [2024-11-19 09:47:43.702691] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:56.100 [2024-11-19 09:47:43.702730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.100 [2024-11-19 09:47:43.702745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:56.100 [2024-11-19 09:47:43.707042] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:56.100 [2024-11-19 09:47:43.707081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.100 [2024-11-19 09:47:43.707095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:56.100 [2024-11-19 09:47:43.711467] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:56.100 [2024-11-19 09:47:43.711519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.100 [2024-11-19 09:47:43.711536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:56.100 [2024-11-19 09:47:43.715854] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:56.100 [2024-11-19 09:47:43.715894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:2 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.100 [2024-11-19 09:47:43.715908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:56.362 [2024-11-19 09:47:43.720284] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:56.362 [2024-11-19 09:47:43.720326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.362 [2024-11-19 09:47:43.720340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:56.362 [2024-11-19 09:47:43.724662] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:56.362 [2024-11-19 09:47:43.724703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.362 [2024-11-19 09:47:43.724717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:56.362 [2024-11-19 09:47:43.728982] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:56.362 [2024-11-19 09:47:43.729020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.362 [2024-11-19 09:47:43.729033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:56.362 [2024-11-19 09:47:43.733428] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:56.362 [2024-11-19 09:47:43.733468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.362 [2024-11-19 09:47:43.733483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:56.362 [2024-11-19 09:47:43.737778] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:56.362 [2024-11-19 09:47:43.737817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.362 [2024-11-19 09:47:43.737831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:56.362 [2024-11-19 09:47:43.742195] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:56.362 [2024-11-19 09:47:43.742259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.362 [2024-11-19 09:47:43.742274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:56.362 [2024-11-19 09:47:43.746668] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:56.362 [2024-11-19 09:47:43.746724] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.362 [2024-11-19 09:47:43.746738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:56.362 [2024-11-19 09:47:43.751018] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:56.362 [2024-11-19 09:47:43.751074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.362 [2024-11-19 09:47:43.751097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:56.362 [2024-11-19 09:47:43.755456] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:56.362 [2024-11-19 09:47:43.755497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.362 [2024-11-19 09:47:43.755510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:56.362 [2024-11-19 09:47:43.759876] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:56.362 [2024-11-19 09:47:43.759933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.362 [2024-11-19 09:47:43.759947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:56.362 [2024-11-19 09:47:43.764345] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:56.362 [2024-11-19 09:47:43.764385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.362 [2024-11-19 09:47:43.764398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:56.362 [2024-11-19 09:47:43.768788] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:56.362 [2024-11-19 09:47:43.768830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.362 [2024-11-19 09:47:43.768850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:56.362 [2024-11-19 09:47:43.773197] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:56.362 [2024-11-19 09:47:43.773263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.362 [2024-11-19 09:47:43.773292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:56.362 [2024-11-19 09:47:43.777642] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 
00:19:56.362 [2024-11-19 09:47:43.777696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.362 [2024-11-19 09:47:43.777709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:56.362 [2024-11-19 09:47:43.781927] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:56.362 [2024-11-19 09:47:43.781979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.362 [2024-11-19 09:47:43.782009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:56.362 [2024-11-19 09:47:43.786290] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:56.362 [2024-11-19 09:47:43.786328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.362 [2024-11-19 09:47:43.786357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:56.362 [2024-11-19 09:47:43.790587] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:56.362 [2024-11-19 09:47:43.790625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.362 [2024-11-19 09:47:43.790639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:56.362 [2024-11-19 09:47:43.794955] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:56.362 [2024-11-19 09:47:43.795009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.362 [2024-11-19 09:47:43.795038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:56.362 [2024-11-19 09:47:43.799345] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:56.362 [2024-11-19 09:47:43.799383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.362 [2024-11-19 09:47:43.799397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:56.362 [2024-11-19 09:47:43.803828] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:56.362 [2024-11-19 09:47:43.803868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.363 [2024-11-19 09:47:43.803882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:56.363 [2024-11-19 09:47:43.808286] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x86b400) 00:19:56.363 [2024-11-19 09:47:43.808326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.363 [2024-11-19 09:47:43.808339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:56.363 [2024-11-19 09:47:43.812727] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:56.363 [2024-11-19 09:47:43.812767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.363 [2024-11-19 09:47:43.812780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:56.363 [2024-11-19 09:47:43.817311] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:56.363 [2024-11-19 09:47:43.817349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.363 [2024-11-19 09:47:43.817362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:56.363 [2024-11-19 09:47:43.821814] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:56.363 [2024-11-19 09:47:43.821853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.363 [2024-11-19 09:47:43.821867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:56.363 [2024-11-19 09:47:43.826259] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:56.363 [2024-11-19 09:47:43.826296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.363 [2024-11-19 09:47:43.826310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:56.363 [2024-11-19 09:47:43.830725] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:56.363 [2024-11-19 09:47:43.830764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.363 [2024-11-19 09:47:43.830778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:56.363 [2024-11-19 09:47:43.835053] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:56.363 [2024-11-19 09:47:43.835091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.363 [2024-11-19 09:47:43.835119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:56.363 [2024-11-19 09:47:43.839446] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:56.363 [2024-11-19 09:47:43.839484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.363 [2024-11-19 09:47:43.839498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:56.363 [2024-11-19 09:47:43.843883] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:56.363 [2024-11-19 09:47:43.843921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.363 [2024-11-19 09:47:43.843935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:56.363 [2024-11-19 09:47:43.848374] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:56.363 [2024-11-19 09:47:43.848413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.363 [2024-11-19 09:47:43.848427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:56.363 [2024-11-19 09:47:43.852886] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:56.363 [2024-11-19 09:47:43.852939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.363 [2024-11-19 09:47:43.852969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:56.363 [2024-11-19 09:47:43.857363] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:56.363 [2024-11-19 09:47:43.857415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.363 [2024-11-19 09:47:43.857443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:56.363 [2024-11-19 09:47:43.861802] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:56.363 [2024-11-19 09:47:43.861856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.363 [2024-11-19 09:47:43.861870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:56.363 [2024-11-19 09:47:43.866319] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:56.363 [2024-11-19 09:47:43.866381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.363 [2024-11-19 09:47:43.866410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 
00:19:56.363 [2024-11-19 09:47:43.870615] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:56.363 [2024-11-19 09:47:43.870669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.363 [2024-11-19 09:47:43.870683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:56.363 [2024-11-19 09:47:43.874959] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:56.363 [2024-11-19 09:47:43.874999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.363 [2024-11-19 09:47:43.875012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:56.363 [2024-11-19 09:47:43.879356] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:56.363 [2024-11-19 09:47:43.879396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.363 [2024-11-19 09:47:43.879410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:56.363 [2024-11-19 09:47:43.883879] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:56.363 [2024-11-19 09:47:43.883917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.363 [2024-11-19 09:47:43.883931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:56.363 [2024-11-19 09:47:43.888467] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:56.363 [2024-11-19 09:47:43.888519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.363 [2024-11-19 09:47:43.888548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:56.363 [2024-11-19 09:47:43.893084] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:56.363 [2024-11-19 09:47:43.893136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.363 [2024-11-19 09:47:43.893165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:56.363 [2024-11-19 09:47:43.897645] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:56.363 [2024-11-19 09:47:43.897715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.363 [2024-11-19 09:47:43.897729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:56.363 [2024-11-19 09:47:43.902467] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:56.363 [2024-11-19 09:47:43.902505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.363 [2024-11-19 09:47:43.902518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:56.363 [2024-11-19 09:47:43.907371] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:56.363 [2024-11-19 09:47:43.907409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.363 [2024-11-19 09:47:43.907423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:56.363 [2024-11-19 09:47:43.911695] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:56.364 [2024-11-19 09:47:43.911732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.364 [2024-11-19 09:47:43.911746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:56.364 [2024-11-19 09:47:43.916082] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:56.364 [2024-11-19 09:47:43.916119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.364 [2024-11-19 09:47:43.916132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:56.364 [2024-11-19 09:47:43.920759] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:56.364 [2024-11-19 09:47:43.920797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.364 [2024-11-19 09:47:43.920810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:56.364 [2024-11-19 09:47:43.925372] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:56.364 [2024-11-19 09:47:43.925425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.364 [2024-11-19 09:47:43.925454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:56.364 [2024-11-19 09:47:43.930103] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:56.364 [2024-11-19 09:47:43.930141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.364 [2024-11-19 09:47:43.930155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:56.364 [2024-11-19 09:47:43.934676] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:56.364 [2024-11-19 09:47:43.934730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.364 [2024-11-19 09:47:43.934743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:56.364 [2024-11-19 09:47:43.939636] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:56.364 [2024-11-19 09:47:43.939674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.364 [2024-11-19 09:47:43.939688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:56.364 [2024-11-19 09:47:43.944208] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:56.364 [2024-11-19 09:47:43.944272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.364 [2024-11-19 09:47:43.944285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:56.364 [2024-11-19 09:47:43.948752] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:56.364 [2024-11-19 09:47:43.948791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.364 [2024-11-19 09:47:43.948805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:56.364 [2024-11-19 09:47:43.953586] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:56.364 [2024-11-19 09:47:43.953640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.364 [2024-11-19 09:47:43.953654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:56.364 [2024-11-19 09:47:43.958162] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:56.364 [2024-11-19 09:47:43.958237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.364 [2024-11-19 09:47:43.958251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:56.364 [2024-11-19 09:47:43.962648] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:56.364 [2024-11-19 09:47:43.962716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.364 [2024-11-19 09:47:43.962745] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:56.364 [2024-11-19 09:47:43.967279] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:56.364 [2024-11-19 09:47:43.967319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.364 [2024-11-19 09:47:43.967332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:56.364 [2024-11-19 09:47:43.971631] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:56.364 [2024-11-19 09:47:43.971666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.364 [2024-11-19 09:47:43.971693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:56.364 [2024-11-19 09:47:43.975893] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:56.364 [2024-11-19 09:47:43.975945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.364 [2024-11-19 09:47:43.975973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:56.364 [2024-11-19 09:47:43.980628] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:56.364 [2024-11-19 09:47:43.980680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.364 [2024-11-19 09:47:43.980709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:56.635 [2024-11-19 09:47:43.985513] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:56.635 [2024-11-19 09:47:43.985555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.635 [2024-11-19 09:47:43.985570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:56.635 [2024-11-19 09:47:43.989831] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:56.635 [2024-11-19 09:47:43.989869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.635 [2024-11-19 09:47:43.989883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:56.635 [2024-11-19 09:47:43.994123] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:56.635 [2024-11-19 09:47:43.994162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.635 
[2024-11-19 09:47:43.994176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:56.635 [2024-11-19 09:47:43.998496] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:56.635 [2024-11-19 09:47:43.998534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.635 [2024-11-19 09:47:43.998548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:56.635 [2024-11-19 09:47:44.002945] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:56.635 [2024-11-19 09:47:44.003000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.635 [2024-11-19 09:47:44.003013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:56.635 [2024-11-19 09:47:44.007331] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:56.635 [2024-11-19 09:47:44.007379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.635 [2024-11-19 09:47:44.007394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:56.635 [2024-11-19 09:47:44.011686] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:56.635 [2024-11-19 09:47:44.011737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.635 [2024-11-19 09:47:44.011765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:56.635 [2024-11-19 09:47:44.016031] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:56.635 [2024-11-19 09:47:44.016084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.635 [2024-11-19 09:47:44.016113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:56.635 [2024-11-19 09:47:44.020293] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:56.635 [2024-11-19 09:47:44.020346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.635 [2024-11-19 09:47:44.020375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:56.635 [2024-11-19 09:47:44.024408] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:56.635 [2024-11-19 09:47:44.024460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23008 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:19:56.635 [2024-11-19 09:47:44.024488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:56.635 [2024-11-19 09:47:44.028919] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:56.635 [2024-11-19 09:47:44.028974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.635 [2024-11-19 09:47:44.029003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:56.635 [2024-11-19 09:47:44.033372] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:56.635 [2024-11-19 09:47:44.033423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.635 [2024-11-19 09:47:44.033452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:56.635 [2024-11-19 09:47:44.037767] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:56.635 [2024-11-19 09:47:44.037803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.635 [2024-11-19 09:47:44.037832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:56.635 [2024-11-19 09:47:44.042079] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:56.635 [2024-11-19 09:47:44.042161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.635 [2024-11-19 09:47:44.042174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:56.635 [2024-11-19 09:47:44.046313] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:56.635 [2024-11-19 09:47:44.046365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.635 [2024-11-19 09:47:44.046393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:56.635 [2024-11-19 09:47:44.050801] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:56.635 [2024-11-19 09:47:44.050839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.635 [2024-11-19 09:47:44.050853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:56.635 [2024-11-19 09:47:44.055079] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:56.635 [2024-11-19 09:47:44.055118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:14 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.635 [2024-11-19 09:47:44.055131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:56.635 [2024-11-19 09:47:44.059479] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:56.635 [2024-11-19 09:47:44.059518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.635 [2024-11-19 09:47:44.059531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:56.635 [2024-11-19 09:47:44.063843] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:56.635 [2024-11-19 09:47:44.063881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.635 [2024-11-19 09:47:44.063894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:56.635 [2024-11-19 09:47:44.068230] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:56.635 [2024-11-19 09:47:44.068268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.635 [2024-11-19 09:47:44.068282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:56.635 [2024-11-19 09:47:44.072600] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:56.635 [2024-11-19 09:47:44.072638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.635 [2024-11-19 09:47:44.072652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:56.635 [2024-11-19 09:47:44.076963] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:56.635 [2024-11-19 09:47:44.077017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.635 [2024-11-19 09:47:44.077031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:56.635 [2024-11-19 09:47:44.081327] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:56.635 [2024-11-19 09:47:44.081379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.635 [2024-11-19 09:47:44.081393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:56.635 [2024-11-19 09:47:44.085723] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:56.635 [2024-11-19 09:47:44.085776] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.635 [2024-11-19 09:47:44.085790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:56.635 [2024-11-19 09:47:44.090183] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:56.635 [2024-11-19 09:47:44.090246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.635 [2024-11-19 09:47:44.090275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:56.635 [2024-11-19 09:47:44.094609] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:56.635 [2024-11-19 09:47:44.094662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.635 [2024-11-19 09:47:44.094690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:56.635 [2024-11-19 09:47:44.099023] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:56.635 [2024-11-19 09:47:44.099060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.635 [2024-11-19 09:47:44.099089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:56.635 [2024-11-19 09:47:44.103352] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:56.635 [2024-11-19 09:47:44.103390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.635 [2024-11-19 09:47:44.103403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:56.635 [2024-11-19 09:47:44.107471] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:56.635 [2024-11-19 09:47:44.107525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.635 [2024-11-19 09:47:44.107539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:56.635 [2024-11-19 09:47:44.111801] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:56.635 [2024-11-19 09:47:44.111852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.635 [2024-11-19 09:47:44.111881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:56.635 [2024-11-19 09:47:44.116123] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:56.635 
[2024-11-19 09:47:44.116175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.635 [2024-11-19 09:47:44.116203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:56.635 [2024-11-19 09:47:44.120397] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:56.635 [2024-11-19 09:47:44.120449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.635 [2024-11-19 09:47:44.120478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:56.635 [2024-11-19 09:47:44.124683] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:56.635 [2024-11-19 09:47:44.124734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.635 [2024-11-19 09:47:44.124763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:56.635 [2024-11-19 09:47:44.129039] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:56.635 [2024-11-19 09:47:44.129091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.635 [2024-11-19 09:47:44.129120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:56.635 [2024-11-19 09:47:44.133267] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:56.635 [2024-11-19 09:47:44.133320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.635 [2024-11-19 09:47:44.133348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:56.635 6711.50 IOPS, 838.94 MiB/s [2024-11-19T09:47:44.259Z] [2024-11-19 09:47:44.139014] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86b400) 00:19:56.636 [2024-11-19 09:47:44.139048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.636 [2024-11-19 09:47:44.139061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:56.636 00:19:56.636 Latency(us) 00:19:56.636 [2024-11-19T09:47:44.259Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:56.636 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:19:56.636 nvme0n1 : 2.00 6711.43 838.93 0.00 0.00 2380.08 1899.05 9711.24 00:19:56.636 [2024-11-19T09:47:44.259Z] =================================================================================================================== 00:19:56.636 [2024-11-19T09:47:44.259Z] Total : 6711.43 838.93 0.00 0.00 2380.08 1899.05 9711.24 
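In the summary just printed, the MiB/s column is simply IOPS times the 131072-byte I/O size scaled to MiB, and the JSON block that follows repeats the same run in machine-readable form. A quick cross-check of that arithmetic (a sketch only, assuming jq is available and that the JSON below were saved to a file such as results.json, which the harness does not actually do here):

  # 6711.4295 IOPS x 131072 B per I/O / 2^20 ~= 838.93 MiB/s, matching the mibps field below
  jq -r '.results[0] | (.iops * .io_size / 1048576)' results.json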
00:19:56.636 { 00:19:56.636 "results": [ 00:19:56.636 { 00:19:56.636 "job": "nvme0n1", 00:19:56.636 "core_mask": "0x2", 00:19:56.636 "workload": "randread", 00:19:56.636 "status": "finished", 00:19:56.636 "queue_depth": 16, 00:19:56.636 "io_size": 131072, 00:19:56.636 "runtime": 2.002405, 00:19:56.636 "iops": 6711.4295060190125, 00:19:56.636 "mibps": 838.9286882523766, 00:19:56.636 "io_failed": 0, 00:19:56.636 "io_timeout": 0, 00:19:56.636 "avg_latency_us": 2380.0785956747322, 00:19:56.636 "min_latency_us": 1899.0545454545454, 00:19:56.636 "max_latency_us": 9711.243636363637 00:19:56.636 } 00:19:56.636 ], 00:19:56.636 "core_count": 1 00:19:56.636 } 00:19:56.636 09:47:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:19:56.636 09:47:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:19:56.636 09:47:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:19:56.636 | .driver_specific 00:19:56.636 | .nvme_error 00:19:56.636 | .status_code 00:19:56.636 | .command_transient_transport_error' 00:19:56.636 09:47:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:19:56.894 09:47:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 434 > 0 )) 00:19:56.894 09:47:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80471 00:19:56.894 09:47:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 80471 ']' 00:19:56.894 09:47:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 80471 00:19:56.894 09:47:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:19:56.894 09:47:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:56.894 09:47:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80471 00:19:56.894 killing process with pid 80471 00:19:56.894 Received shutdown signal, test time was about 2.000000 seconds 00:19:56.894 00:19:56.894 Latency(us) 00:19:56.894 [2024-11-19T09:47:44.517Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:56.894 [2024-11-19T09:47:44.517Z] =================================================================================================================== 00:19:56.894 [2024-11-19T09:47:44.517Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:56.894 09:47:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:56.894 09:47:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:56.894 09:47:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80471' 00:19:56.894 09:47:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 80471 00:19:56.894 09:47:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 80471 00:19:57.154 09:47:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:19:57.154 09:47:44 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:19:57.154 09:47:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:19:57.154 09:47:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:19:57.154 09:47:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:19:57.154 09:47:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80530 00:19:57.154 09:47:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:19:57.154 09:47:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80530 /var/tmp/bperf.sock 00:19:57.154 09:47:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 80530 ']' 00:19:57.154 09:47:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:19:57.154 09:47:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:57.154 09:47:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:19:57.154 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:19:57.154 09:47:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:57.154 09:47:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:57.154 [2024-11-19 09:47:44.773547] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 
00:19:57.154 [2024-11-19 09:47:44.773700] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80530 ] 00:19:57.413 [2024-11-19 09:47:44.926393] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:57.413 [2024-11-19 09:47:44.990157] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:57.671 [2024-11-19 09:47:45.045606] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:58.237 09:47:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:58.237 09:47:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:19:58.237 09:47:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:19:58.237 09:47:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:19:58.495 09:47:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:19:58.495 09:47:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.495 09:47:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:58.495 09:47:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.495 09:47:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:58.495 09:47:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:59.154 nvme0n1 00:19:59.154 09:47:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:19:59.154 09:47:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.154 09:47:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:59.154 09:47:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.154 09:47:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:19:59.154 09:47:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:19:59.154 Running I/O for 2 seconds... 
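The xtrace above is the setup for the next randwrite error-injection pass. Condensed into one place, the sequence the harness drives over /var/tmp/bperf.sock looks roughly like the following (commands and arguments copied from the trace; the bare rpc_cmd wrapper is kept as-is because its target socket is not expanded in this excerpt, and the final read-back mirrors the get_transient_errcount call traced after the previous pass):

  # launch bdevperf idle (-z): 2 s randwrite, 4 KiB I/O, queue depth 128, kicked off later via perform_tests
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z &

  # enable per-controller NVMe error counters and unlimited bdev retries
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # attach the target over TCP with data digest enabled (--ddgst), then arm crc32c corruption in the accel layer
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256

  # run the 2-second workload, then count completions that ended in COMMAND TRANSIENT TRANSPORT ERROR
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'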
00:19:59.154 [2024-11-19 09:47:46.633422] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13845b0) with pdu=0x2000166fb048 00:19:59.154 [2024-11-19 09:47:46.634928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:537 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.154 [2024-11-19 09:47:46.634975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:59.154 [2024-11-19 09:47:46.650126] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13845b0) with pdu=0x2000166fb8b8 00:19:59.154 [2024-11-19 09:47:46.651638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:1094 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.154 [2024-11-19 09:47:46.651677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.154 [2024-11-19 09:47:46.666973] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13845b0) with pdu=0x2000166fc128 00:19:59.154 [2024-11-19 09:47:46.668455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:17815 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.154 [2024-11-19 09:47:46.668493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:19:59.154 [2024-11-19 09:47:46.683551] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13845b0) with pdu=0x2000166fc998 00:19:59.154 [2024-11-19 09:47:46.684946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:14845 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.154 [2024-11-19 09:47:46.685000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:19:59.154 [2024-11-19 09:47:46.700128] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13845b0) with pdu=0x2000166fd208 00:19:59.154 [2024-11-19 09:47:46.701524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:6576 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.154 [2024-11-19 09:47:46.701563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:19:59.154 [2024-11-19 09:47:46.716714] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13845b0) with pdu=0x2000166fda78 00:19:59.154 [2024-11-19 09:47:46.718141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:807 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.154 [2024-11-19 09:47:46.718188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:19:59.154 [2024-11-19 09:47:46.733289] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13845b0) with pdu=0x2000166fe2e8 00:19:59.154 [2024-11-19 09:47:46.734657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:5549 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.154 [2024-11-19 09:47:46.734711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0076 p:0 
m:0 dnr:0 00:19:59.154 [2024-11-19 09:47:46.749904] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13845b0) with pdu=0x2000166feb58 00:19:59.154 [2024-11-19 09:47:46.751318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4207 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.154 [2024-11-19 09:47:46.751359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:19:59.154 [2024-11-19 09:47:46.773456] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13845b0) with pdu=0x2000166fef90 00:19:59.413 [2024-11-19 09:47:46.776027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5351 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.413 [2024-11-19 09:47:46.776071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:19:59.413 [2024-11-19 09:47:46.790030] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13845b0) with pdu=0x2000166feb58 00:19:59.413 [2024-11-19 09:47:46.792682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:18050 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.413 [2024-11-19 09:47:46.792723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:19:59.413 [2024-11-19 09:47:46.806788] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13845b0) with pdu=0x2000166fe2e8 00:19:59.413 [2024-11-19 09:47:46.809326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:6248 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.413 [2024-11-19 09:47:46.809369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:19:59.413 [2024-11-19 09:47:46.823416] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13845b0) with pdu=0x2000166fda78 00:19:59.413 [2024-11-19 09:47:46.826040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:8504 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.413 [2024-11-19 09:47:46.826094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:59.413 [2024-11-19 09:47:46.840111] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13845b0) with pdu=0x2000166fd208 00:19:59.413 [2024-11-19 09:47:46.842638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:6052 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.413 [2024-11-19 09:47:46.842678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:19:59.413 [2024-11-19 09:47:46.856606] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13845b0) with pdu=0x2000166fc998 00:19:59.413 [2024-11-19 09:47:46.859036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:459 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.413 [2024-11-19 09:47:46.859084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 
sqhd:0067 p:0 m:0 dnr:0 00:19:59.413 [2024-11-19 09:47:46.873040] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13845b0) with pdu=0x2000166fc128 00:19:59.413 [2024-11-19 09:47:46.875584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:15398 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.413 [2024-11-19 09:47:46.875625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:59.413 [2024-11-19 09:47:46.889507] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13845b0) with pdu=0x2000166fb8b8 00:19:59.413 [2024-11-19 09:47:46.891967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:2049 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.413 [2024-11-19 09:47:46.892006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:19:59.413 [2024-11-19 09:47:46.906039] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13845b0) with pdu=0x2000166fb048 00:19:59.413 [2024-11-19 09:47:46.908472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:13374 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.413 [2024-11-19 09:47:46.908513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:59.413 [2024-11-19 09:47:46.922496] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13845b0) with pdu=0x2000166fa7d8 00:19:59.413 [2024-11-19 09:47:46.924899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19041 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.413 [2024-11-19 09:47:46.924939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:19:59.413 [2024-11-19 09:47:46.938948] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13845b0) with pdu=0x2000166f9f68 00:19:59.413 [2024-11-19 09:47:46.941333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:13492 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.413 [2024-11-19 09:47:46.941371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:19:59.413 [2024-11-19 09:47:46.955367] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13845b0) with pdu=0x2000166f96f8 00:19:59.413 [2024-11-19 09:47:46.957728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:21858 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.413 [2024-11-19 09:47:46.957767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:19:59.413 [2024-11-19 09:47:46.971929] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13845b0) with pdu=0x2000166f8e88 00:19:59.413 [2024-11-19 09:47:46.974262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:15736 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.413 [2024-11-19 09:47:46.974304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:33 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:19:59.413 [2024-11-19 09:47:46.988404] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13845b0) with pdu=0x2000166f8618 00:19:59.413 [2024-11-19 09:47:46.990784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:10648 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.413 [2024-11-19 09:47:46.990823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:19:59.413 [2024-11-19 09:47:47.004949] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13845b0) with pdu=0x2000166f7da8 00:19:59.413 [2024-11-19 09:47:47.007256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:24911 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.413 [2024-11-19 09:47:47.007297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:19:59.413 [2024-11-19 09:47:47.021341] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13845b0) with pdu=0x2000166f7538 00:19:59.413 [2024-11-19 09:47:47.023600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:24806 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.413 [2024-11-19 09:47:47.023639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:19:59.672 [2024-11-19 09:47:47.037869] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13845b0) with pdu=0x2000166f6cc8 00:19:59.672 [2024-11-19 09:47:47.040125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:23688 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.672 [2024-11-19 09:47:47.040164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:19:59.672 [2024-11-19 09:47:47.054277] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13845b0) with pdu=0x2000166f6458 00:19:59.672 [2024-11-19 09:47:47.056565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:1711 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.672 [2024-11-19 09:47:47.056620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:19:59.672 [2024-11-19 09:47:47.070929] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13845b0) with pdu=0x2000166f5be8 00:19:59.672 [2024-11-19 09:47:47.073187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:8871 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.672 [2024-11-19 09:47:47.073234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:19:59.672 [2024-11-19 09:47:47.087540] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13845b0) with pdu=0x2000166f5378 00:19:59.672 [2024-11-19 09:47:47.089756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:11014 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.672 [2024-11-19 09:47:47.089797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:59.672 [2024-11-19 09:47:47.104178] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13845b0) with pdu=0x2000166f4b08 00:19:59.672 [2024-11-19 09:47:47.106336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:25365 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.672 [2024-11-19 09:47:47.106385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:19:59.672 [2024-11-19 09:47:47.120571] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13845b0) with pdu=0x2000166f4298 00:19:59.673 [2024-11-19 09:47:47.122757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:7386 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.673 [2024-11-19 09:47:47.122798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:59.673 [2024-11-19 09:47:47.137079] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13845b0) with pdu=0x2000166f3a28 00:19:59.673 [2024-11-19 09:47:47.139235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:3286 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.673 [2024-11-19 09:47:47.139291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:59.673 [2024-11-19 09:47:47.153549] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13845b0) with pdu=0x2000166f31b8 00:19:59.673 [2024-11-19 09:47:47.155679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:13100 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.673 [2024-11-19 09:47:47.155719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:19:59.673 [2024-11-19 09:47:47.170005] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13845b0) with pdu=0x2000166f2948 00:19:59.673 [2024-11-19 09:47:47.172141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:12595 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.673 [2024-11-19 09:47:47.172179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:59.673 [2024-11-19 09:47:47.186427] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13845b0) with pdu=0x2000166f20d8 00:19:59.673 [2024-11-19 09:47:47.188524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:20794 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.673 [2024-11-19 09:47:47.188566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:19:59.673 [2024-11-19 09:47:47.202828] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13845b0) with pdu=0x2000166f1868 00:19:59.673 [2024-11-19 09:47:47.204937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:2412 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.673 [2024-11-19 09:47:47.204977] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:19:59.673 [2024-11-19 09:47:47.219406] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13845b0) with pdu=0x2000166f0ff8 00:19:59.673 [2024-11-19 09:47:47.221444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:20913 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.673 [2024-11-19 09:47:47.221484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:19:59.673 [2024-11-19 09:47:47.235893] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13845b0) with pdu=0x2000166f0788 00:19:59.673 [2024-11-19 09:47:47.237904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:11112 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.673 [2024-11-19 09:47:47.237941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:19:59.673 [2024-11-19 09:47:47.252374] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13845b0) with pdu=0x2000166eff18 00:19:59.673 [2024-11-19 09:47:47.254353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:222 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.673 [2024-11-19 09:47:47.254392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:19:59.673 [2024-11-19 09:47:47.268758] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13845b0) with pdu=0x2000166ef6a8 00:19:59.673 [2024-11-19 09:47:47.270754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:21862 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.673 [2024-11-19 09:47:47.270794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:19:59.673 [2024-11-19 09:47:47.285300] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13845b0) with pdu=0x2000166eee38 00:19:59.673 [2024-11-19 09:47:47.287289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:11742 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.673 [2024-11-19 09:47:47.287458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:19:59.931 [2024-11-19 09:47:47.301976] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13845b0) with pdu=0x2000166ee5c8 00:19:59.932 [2024-11-19 09:47:47.303955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:16554 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.932 [2024-11-19 09:47:47.303994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:19:59.932 [2024-11-19 09:47:47.318741] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13845b0) with pdu=0x2000166edd58 00:19:59.932 [2024-11-19 09:47:47.320736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:25291 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.932 [2024-11-19 
09:47:47.320776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:59.932 [2024-11-19 09:47:47.335386] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13845b0) with pdu=0x2000166ed4e8 00:19:59.932 [2024-11-19 09:47:47.337463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:7985 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.932 [2024-11-19 09:47:47.337501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:59.932 [2024-11-19 09:47:47.352019] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13845b0) with pdu=0x2000166ecc78 00:19:59.932 [2024-11-19 09:47:47.353895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:10222 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.932 [2024-11-19 09:47:47.353935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:59.932 [2024-11-19 09:47:47.368628] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13845b0) with pdu=0x2000166ec408 00:19:59.932 [2024-11-19 09:47:47.370594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:4189 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.932 [2024-11-19 09:47:47.370632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:19:59.932 [2024-11-19 09:47:47.385313] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13845b0) with pdu=0x2000166ebb98 00:19:59.932 [2024-11-19 09:47:47.387124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:7664 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.932 [2024-11-19 09:47:47.387162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:59.932 [2024-11-19 09:47:47.401861] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13845b0) with pdu=0x2000166eb328 00:19:59.932 [2024-11-19 09:47:47.403740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:170 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.932 [2024-11-19 09:47:47.403913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:59.932 [2024-11-19 09:47:47.418773] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13845b0) with pdu=0x2000166eaab8 00:19:59.932 [2024-11-19 09:47:47.420671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:15113 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.932 [2024-11-19 09:47:47.420710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:59.932 [2024-11-19 09:47:47.435613] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13845b0) with pdu=0x2000166ea248 00:19:59.932 [2024-11-19 09:47:47.437654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:24664 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:19:59.932 [2024-11-19 09:47:47.437690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:59.932 [2024-11-19 09:47:47.452880] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13845b0) with pdu=0x2000166e99d8 00:19:59.932 [2024-11-19 09:47:47.454662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:16176 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.932 [2024-11-19 09:47:47.454702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:19:59.932 [2024-11-19 09:47:47.469547] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13845b0) with pdu=0x2000166e9168 00:19:59.932 [2024-11-19 09:47:47.471585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:3359 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.932 [2024-11-19 09:47:47.471768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:19:59.932 [2024-11-19 09:47:47.487173] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13845b0) with pdu=0x2000166e88f8 00:19:59.932 [2024-11-19 09:47:47.489123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:1866 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.932 [2024-11-19 09:47:47.489166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:19:59.932 [2024-11-19 09:47:47.503998] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13845b0) with pdu=0x2000166e8088 00:19:59.932 [2024-11-19 09:47:47.505731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:20192 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.932 [2024-11-19 09:47:47.505890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:19:59.932 [2024-11-19 09:47:47.520998] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13845b0) with pdu=0x2000166e7818 00:19:59.932 [2024-11-19 09:47:47.522736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:22363 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.932 [2024-11-19 09:47:47.522777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:19:59.932 [2024-11-19 09:47:47.537768] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13845b0) with pdu=0x2000166e6fa8 00:19:59.932 [2024-11-19 09:47:47.539441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:20303 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.932 [2024-11-19 09:47:47.539609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:20:00.191 [2024-11-19 09:47:47.554545] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13845b0) with pdu=0x2000166e6738 00:20:00.191 [2024-11-19 09:47:47.556254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:6073 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:20:00.191 [2024-11-19 09:47:47.556299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:20:00.191 [2024-11-19 09:47:47.571258] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13845b0) with pdu=0x2000166e5ec8 00:20:00.191 [2024-11-19 09:47:47.572894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:1195 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:00.191 [2024-11-19 09:47:47.572942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:20:00.191 [2024-11-19 09:47:47.587816] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13845b0) with pdu=0x2000166e5658 00:20:00.191 [2024-11-19 09:47:47.589419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:8500 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:00.192 [2024-11-19 09:47:47.589458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:20:00.192 [2024-11-19 09:47:47.604353] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13845b0) with pdu=0x2000166e4de8 00:20:00.192 [2024-11-19 09:47:47.605933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:20326 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:00.192 [2024-11-19 09:47:47.605975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:20:00.192 15182.00 IOPS, 59.30 MiB/s [2024-11-19T09:47:47.815Z] [2024-11-19 09:47:47.622554] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13845b0) with pdu=0x2000166e4578 00:20:00.192 [2024-11-19 09:47:47.624224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:9295 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:00.192 [2024-11-19 09:47:47.624263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:20:00.192 [2024-11-19 09:47:47.639553] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13845b0) with pdu=0x2000166e3d08 00:20:00.192 [2024-11-19 09:47:47.641100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:20189 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:00.192 [2024-11-19 09:47:47.641140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:20:00.192 [2024-11-19 09:47:47.656326] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13845b0) with pdu=0x2000166e3498 00:20:00.192 [2024-11-19 09:47:47.658014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:3947 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:00.192 [2024-11-19 09:47:47.658174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:20:00.192 [2024-11-19 09:47:47.673404] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13845b0) with pdu=0x2000166e2c28 00:20:00.192 [2024-11-19 09:47:47.674892] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:1281 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:00.192 [2024-11-19 09:47:47.674932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:20:00.192 [2024-11-19 09:47:47.690034] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13845b0) with pdu=0x2000166e23b8 00:20:00.192 [2024-11-19 09:47:47.691580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:17400 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:00.192 [2024-11-19 09:47:47.691619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:20:00.192 [2024-11-19 09:47:47.706697] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13845b0) with pdu=0x2000166e1b48 00:20:00.192 [2024-11-19 09:47:47.708167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:20749 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:00.192 [2024-11-19 09:47:47.708220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:00.192 [2024-11-19 09:47:47.723287] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13845b0) with pdu=0x2000166e12d8 00:20:00.192 [2024-11-19 09:47:47.724892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:25251 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:00.192 [2024-11-19 09:47:47.724933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:20:00.192 [2024-11-19 09:47:47.740100] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13845b0) with pdu=0x2000166e0a68 00:20:00.192 [2024-11-19 09:47:47.741561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:4399 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:00.192 [2024-11-19 09:47:47.741602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:00.192 [2024-11-19 09:47:47.756636] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13845b0) with pdu=0x2000166e01f8 00:20:00.192 [2024-11-19 09:47:47.758048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13964 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:00.192 [2024-11-19 09:47:47.758242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:20:00.192 [2024-11-19 09:47:47.773617] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13845b0) with pdu=0x2000166df988 00:20:00.192 [2024-11-19 09:47:47.775143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:9024 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:00.192 [2024-11-19 09:47:47.775185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:20:00.192 [2024-11-19 09:47:47.790356] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13845b0) with pdu=0x2000166df118 00:20:00.192 [2024-11-19 
09:47:47.791747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:19310 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:00.192 [2024-11-19 09:47:47.791906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:20:00.192 [2024-11-19 09:47:47.806977] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13845b0) with pdu=0x2000166de8a8 00:20:00.192 [2024-11-19 09:47:47.808388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:21612 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:00.192 [2024-11-19 09:47:47.808426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:20:00.451 [2024-11-19 09:47:47.823546] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13845b0) with pdu=0x2000166de038 00:20:00.451 [2024-11-19 09:47:47.824880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1748 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:00.451 [2024-11-19 09:47:47.824921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:20:00.451 [2024-11-19 09:47:47.847059] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13845b0) with pdu=0x2000166de038 00:20:00.451 [2024-11-19 09:47:47.849656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:2027 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:00.451 [2024-11-19 09:47:47.849697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:20:00.451 [2024-11-19 09:47:47.863681] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13845b0) with pdu=0x2000166de8a8 00:20:00.451 [2024-11-19 09:47:47.866415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:10315 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:00.451 [2024-11-19 09:47:47.866450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:20:00.451 [2024-11-19 09:47:47.880352] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13845b0) with pdu=0x2000166df118 00:20:00.451 [2024-11-19 09:47:47.882851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:8360 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:00.451 [2024-11-19 09:47:47.883010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:20:00.451 [2024-11-19 09:47:47.897113] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13845b0) with pdu=0x2000166df988 00:20:00.451 [2024-11-19 09:47:47.899722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:24715 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:00.451 [2024-11-19 09:47:47.899763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:20:00.451 [2024-11-19 09:47:47.913817] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13845b0) with pdu=0x2000166e01f8 
00:20:00.451 [2024-11-19 09:47:47.916289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:10741 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:00.451 [2024-11-19 09:47:47.916458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:20:00.451 [2024-11-19 09:47:47.930415] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13845b0) with pdu=0x2000166e0a68 00:20:00.451 [2024-11-19 09:47:47.933116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:16301 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:00.451 [2024-11-19 09:47:47.933151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:20:00.451 [2024-11-19 09:47:47.947088] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13845b0) with pdu=0x2000166e12d8 00:20:00.451 [2024-11-19 09:47:47.949590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:19261 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:00.451 [2024-11-19 09:47:47.949630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:20:00.451 [2024-11-19 09:47:47.963840] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13845b0) with pdu=0x2000166e1b48 00:20:00.451 [2024-11-19 09:47:47.966367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:8793 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:00.451 [2024-11-19 09:47:47.966406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:00.451 [2024-11-19 09:47:47.980537] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13845b0) with pdu=0x2000166e23b8 00:20:00.451 [2024-11-19 09:47:47.982920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:2738 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:00.451 [2024-11-19 09:47:47.982973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:20:00.451 [2024-11-19 09:47:47.997145] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13845b0) with pdu=0x2000166e2c28 00:20:00.451 [2024-11-19 09:47:47.999535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:19958 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:00.451 [2024-11-19 09:47:47.999575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:20:00.451 [2024-11-19 09:47:48.013602] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13845b0) with pdu=0x2000166e3498 00:20:00.451 [2024-11-19 09:47:48.016083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:16123 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:00.451 [2024-11-19 09:47:48.016124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:20:00.451 [2024-11-19 09:47:48.030142] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x13845b0) with pdu=0x2000166e3d08 00:20:00.451 [2024-11-19 09:47:48.032492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:19416 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:00.451 [2024-11-19 09:47:48.032532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:20:00.451 [2024-11-19 09:47:48.046614] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13845b0) with pdu=0x2000166e4578 00:20:00.451 [2024-11-19 09:47:48.048952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:6119 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:00.451 [2024-11-19 09:47:48.049001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:20:00.451 [2024-11-19 09:47:48.063062] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13845b0) with pdu=0x2000166e4de8 00:20:00.451 [2024-11-19 09:47:48.065365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:22390 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:00.451 [2024-11-19 09:47:48.065404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:20:00.710 [2024-11-19 09:47:48.079515] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13845b0) with pdu=0x2000166e5658 00:20:00.710 [2024-11-19 09:47:48.081813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:17950 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:00.710 [2024-11-19 09:47:48.081855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:20:00.710 [2024-11-19 09:47:48.095926] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13845b0) with pdu=0x2000166e5ec8 00:20:00.710 [2024-11-19 09:47:48.098141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:23442 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:00.710 [2024-11-19 09:47:48.098179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:20:00.710 [2024-11-19 09:47:48.112388] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13845b0) with pdu=0x2000166e6738 00:20:00.710 [2024-11-19 09:47:48.114874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:4556 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:00.710 [2024-11-19 09:47:48.114915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:20:00.710 [2024-11-19 09:47:48.129164] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13845b0) with pdu=0x2000166e6fa8 00:20:00.710 [2024-11-19 09:47:48.131398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:8815 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:00.710 [2024-11-19 09:47:48.131439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:20:00.710 [2024-11-19 09:47:48.145729] tcp.c:2233:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x13845b0) with pdu=0x2000166e7818 00:20:00.710 [2024-11-19 09:47:48.147979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:1109 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:00.710 [2024-11-19 09:47:48.148031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:20:00.710 [2024-11-19 09:47:48.162306] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13845b0) with pdu=0x2000166e8088 00:20:00.710 [2024-11-19 09:47:48.164460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:4352 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:00.710 [2024-11-19 09:47:48.164666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:20:00.710 [2024-11-19 09:47:48.178913] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13845b0) with pdu=0x2000166e88f8 00:20:00.710 [2024-11-19 09:47:48.181085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:12548 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:00.711 [2024-11-19 09:47:48.181125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:20:00.711 [2024-11-19 09:47:48.195408] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13845b0) with pdu=0x2000166e9168 00:20:00.711 [2024-11-19 09:47:48.197681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:276 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:00.711 [2024-11-19 09:47:48.197721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:00.711 [2024-11-19 09:47:48.212265] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13845b0) with pdu=0x2000166e99d8 00:20:00.711 [2024-11-19 09:47:48.214458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:2236 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:00.711 [2024-11-19 09:47:48.214498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:20:00.711 [2024-11-19 09:47:48.228849] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13845b0) with pdu=0x2000166ea248 00:20:00.711 [2024-11-19 09:47:48.230975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:9867 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:00.711 [2024-11-19 09:47:48.231015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:00.711 [2024-11-19 09:47:48.245389] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13845b0) with pdu=0x2000166eaab8 00:20:00.711 [2024-11-19 09:47:48.247539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:8357 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:00.711 [2024-11-19 09:47:48.247585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:20:00.711 [2024-11-19 09:47:48.261915] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13845b0) with pdu=0x2000166eb328 00:20:00.711 [2024-11-19 09:47:48.264089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:9092 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:00.711 [2024-11-19 09:47:48.264190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:20:00.711 [2024-11-19 09:47:48.281675] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13845b0) with pdu=0x2000166ebb98 00:20:00.711 [2024-11-19 09:47:48.283799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:8161 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:00.711 [2024-11-19 09:47:48.283849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:20:00.711 [2024-11-19 09:47:48.299891] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13845b0) with pdu=0x2000166ec408 00:20:00.711 [2024-11-19 09:47:48.302018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:23437 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:00.711 [2024-11-19 09:47:48.302100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:20:00.711 [2024-11-19 09:47:48.319582] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13845b0) with pdu=0x2000166ecc78 00:20:00.711 [2024-11-19 09:47:48.321575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:20534 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:00.711 [2024-11-19 09:47:48.321793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:20:00.969 [2024-11-19 09:47:48.337989] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13845b0) with pdu=0x2000166ed4e8 00:20:00.969 [2024-11-19 09:47:48.340036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10708 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:00.969 [2024-11-19 09:47:48.340093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:20:00.969 [2024-11-19 09:47:48.355997] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13845b0) with pdu=0x2000166edd58 00:20:00.969 [2024-11-19 09:47:48.357975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:21954 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:00.969 [2024-11-19 09:47:48.358022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:20:00.969 [2024-11-19 09:47:48.374222] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13845b0) with pdu=0x2000166ee5c8 00:20:00.969 [2024-11-19 09:47:48.376360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22098 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:00.969 [2024-11-19 09:47:48.376571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:20:00.969 [2024-11-19 
09:47:48.392558] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13845b0) with pdu=0x2000166eee38 00:20:00.969 [2024-11-19 09:47:48.394482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17835 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:00.969 [2024-11-19 09:47:48.394530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:20:00.969 [2024-11-19 09:47:48.410694] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13845b0) with pdu=0x2000166ef6a8 00:20:00.969 [2024-11-19 09:47:48.412618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:13826 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:00.969 [2024-11-19 09:47:48.412667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:20:00.969 [2024-11-19 09:47:48.428895] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13845b0) with pdu=0x2000166eff18 00:20:00.969 [2024-11-19 09:47:48.430802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:5069 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:00.969 [2024-11-19 09:47:48.430852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:20:00.969 [2024-11-19 09:47:48.446969] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13845b0) with pdu=0x2000166f0788 00:20:00.969 [2024-11-19 09:47:48.448895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:16911 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:00.969 [2024-11-19 09:47:48.448955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:20:00.969 [2024-11-19 09:47:48.465231] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13845b0) with pdu=0x2000166f0ff8 00:20:00.969 [2024-11-19 09:47:48.467081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20331 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:00.969 [2024-11-19 09:47:48.467131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:20:00.969 [2024-11-19 09:47:48.483291] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13845b0) with pdu=0x2000166f1868 00:20:00.969 [2024-11-19 09:47:48.485135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:9728 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:00.969 [2024-11-19 09:47:48.485187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:00.969 [2024-11-19 09:47:48.501352] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13845b0) with pdu=0x2000166f20d8 00:20:00.969 [2024-11-19 09:47:48.503224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:13022 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:00.969 [2024-11-19 09:47:48.503277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 
00:20:00.969 [2024-11-19 09:47:48.519503] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13845b0) with pdu=0x2000166f2948
00:20:00.969 [2024-11-19 09:47:48.521303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:240 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:00.969 [2024-11-19 09:47:48.521351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:20:00.969 [2024-11-19 09:47:48.537679] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13845b0) with pdu=0x2000166f31b8
00:20:00.969 [2024-11-19 09:47:48.539547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:1609 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:00.969 [2024-11-19 09:47:48.539595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0020 p:0 m:0 dnr:0
00:20:00.969 [2024-11-19 09:47:48.555871] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13845b0) with pdu=0x2000166f3a28
00:20:00.969 [2024-11-19 09:47:48.557732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:19890 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:00.970 [2024-11-19 09:47:48.557788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:001e p:0 m:0 dnr:0
00:20:00.970 [2024-11-19 09:47:48.574207] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13845b0) with pdu=0x2000166f4298
00:20:00.970 [2024-11-19 09:47:48.575944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:8516 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:00.970 [2024-11-19 09:47:48.576000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:20:01.228 [2024-11-19 09:47:48.592422] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13845b0) with pdu=0x2000166f4b08
00:20:01.228 [2024-11-19 09:47:48.594190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:21005 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:01.228 [2024-11-19 09:47:48.594248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:001a p:0 m:0 dnr:0
00:20:01.228 [2024-11-19 09:47:48.610590] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13845b0) with pdu=0x2000166f5378
00:20:01.228 [2024-11-19 09:47:48.612322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:3981 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:01.228 [2024-11-19 09:47:48.612378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0018 p:0 m:0 dnr:0
00:20:01.228 14928.00 IOPS, 58.31 MiB/s
00:20:01.228 Latency(us)
00:20:01.228 [2024-11-19T09:47:48.851Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:01.228 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:20:01.228 nvme0n1 : 2.01 14930.14 58.32 0.00 0.00 8564.54 6374.87 31695.59
00:20:01.228 [2024-11-19T09:47:48.851Z] ===================================================================================================================
00:20:01.228 [2024-11-19T09:47:48.851Z] Total : 14930.14 58.32 0.00 0.00 8564.54 6374.87 31695.59
00:20:01.228 {
00:20:01.228 "results": [
00:20:01.228 {
00:20:01.228 "job": "nvme0n1",
00:20:01.228 "core_mask": "0x2",
00:20:01.228 "workload": "randwrite",
00:20:01.228 "status": "finished",
00:20:01.228 "queue_depth": 128,
00:20:01.228 "io_size": 4096,
00:20:01.228 "runtime": 2.008286,
00:20:01.228 "iops": 14930.144411702317,
00:20:01.228 "mibps": 58.320876608212174,
00:20:01.228 "io_failed": 0,
00:20:01.228 "io_timeout": 0,
00:20:01.228 "avg_latency_us": 8564.536008537887,
00:20:01.228 "min_latency_us": 6374.865454545455,
00:20:01.228 "max_latency_us": 31695.592727272728
00:20:01.228 }
00:20:01.228 ],
00:20:01.228 "core_count": 1
00:20:01.228 }
00:20:01.228 09:47:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:20:01.228 09:47:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:20:01.228 | .driver_specific
00:20:01.228 | .nvme_error
00:20:01.228 | .status_code
00:20:01.228 | .command_transient_transport_error'
00:20:01.228 09:47:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:20:01.228 09:47:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:20:01.487 09:47:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 117 > 0 ))
00:20:01.487 09:47:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80530
00:20:01.487 09:47:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 80530 ']'
00:20:01.487 09:47:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 80530
00:20:01.487 09:47:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:20:01.487 09:47:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:20:01.487 09:47:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80530
00:20:01.487 09:47:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:20:01.487 09:47:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:20:01.487 killing process with pid 80530
09:47:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80530'
Received shutdown signal, test time was about 2.000000 seconds
00:20:01.487
00:20:01.487 Latency(us)
00:20:01.487 [2024-11-19T09:47:49.110Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:01.487 [2024-11-19T09:47:49.110Z] ===================================================================================================================
00:20:01.487 [2024-11-19T09:47:49.110Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:20:01.487 09:47:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 80530
00:20:01.487 09:47:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 80530
00:20:01.745 09:47:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:20:01.745 09:47:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:20:01.745 09:47:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:20:01.745 09:47:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:20:01.745 09:47:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:20:01.745 09:47:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80586
00:20:01.745 09:47:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80586 /var/tmp/bperf.sock
00:20:01.745 09:47:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:20:01.745 09:47:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 80586 ']'
00:20:01.745 09:47:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:20:01.745 09:47:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:20:01.745 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:20:01.745 09:47:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:20:01.745 09:47:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:20:01.745 09:47:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:20:01.745 [2024-11-19 09:47:49.265161] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization...
00:20:01.745 I/O size of 131072 is greater than zero copy threshold (65536).
00:20:01.745 Zero copy mechanism will not be used.
00:20:01.745 [2024-11-19 09:47:49.265292] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80586 ]
00:20:02.002 [2024-11-19 09:47:49.419872] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:02.002 [2024-11-19 09:47:49.491813] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:20:02.002 [2024-11-19 09:47:49.551872] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring
00:20:02.002 09:47:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:20:02.002 09:47:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:20:02.002 09:47:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:20:02.260 09:47:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:20:02.519 09:47:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:20:02.519 09:47:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:02.519 09:47:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:20:02.519 09:47:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:02.519 09:47:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:20:02.519 09:47:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:20:02.776 nvme0n1
00:20:02.776 09:47:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:20:02.776 09:47:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:02.777 09:47:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:20:02.777 09:47:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:02.777 09:47:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:20:02.777 09:47:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:20:03.035 I/O size of 131072 is greater than zero copy threshold (65536).
00:20:03.035 Zero copy mechanism will not be used.
00:20:03.035 Running I/O for 2 seconds...
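The xtrace lines above show how this pass of the test is wired up end to end. Below is a condensed sketch of that sequence; it uses only the commands, paths, and arguments that appear verbatim in the trace, the RPC shorthand variable is an illustrative addition introduced here, and the "(( 117 > 0 ))" count seen earlier belongs to the previous 4 KiB pass, not this one.

  # start bdevperf with its RPC server on /var/tmp/bperf.sock, as the harness does above
  # (the harness then uses waitforlisten to wait for that socket before issuing RPCs)
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
      -w randwrite -o 131072 -t 2 -q 16 -z &
  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"   # shorthand, not part of the harness
  # keep per-controller NVMe error statistics and enable bdev-level retries of failed I/O
  $RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # attach the NVMe-oF TCP controller with data digest enabled, then arm crc32c error injection
  $RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  $RPC accel_error_inject_error -o crc32c -t corrupt -i 32
  # drive the workload, then read back how many completions ended with SCT 0x00 / SC 0x22
  # (COMMAND TRANSIENT TRANSPORT ERROR, the status printed for every digest failure in this log)
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
  $RPC bdev_get_iostat -b nvme0n1 | jq -r '.bdevs[0] | .driver_specific | .nvme_error
      | .status_code | .command_transient_transport_error'

A nonzero value from the final query is what the earlier "(( 117 > 0 ))" assertion checked for the 4 KiB run; the equivalent completions for this 128 KiB run accumulate in the output that follows.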
00:20:03.035 [2024-11-19 09:47:50.475801] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:03.035 [2024-11-19 09:47:50.475900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.035 [2024-11-19 09:47:50.475930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:03.035 [2024-11-19 09:47:50.481419] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:03.035 [2024-11-19 09:47:50.481500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.035 [2024-11-19 09:47:50.481525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:03.035 [2024-11-19 09:47:50.486656] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:03.035 [2024-11-19 09:47:50.486754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.036 [2024-11-19 09:47:50.486777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:03.036 [2024-11-19 09:47:50.491951] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:03.036 [2024-11-19 09:47:50.492058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.036 [2024-11-19 09:47:50.492082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:03.036 [2024-11-19 09:47:50.497319] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:03.036 [2024-11-19 09:47:50.497406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.036 [2024-11-19 09:47:50.497430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:03.036 [2024-11-19 09:47:50.502590] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:03.036 [2024-11-19 09:47:50.502690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.036 [2024-11-19 09:47:50.502713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:03.036 [2024-11-19 09:47:50.507891] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:03.036 [2024-11-19 09:47:50.508017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.036 [2024-11-19 09:47:50.508040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 
cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:03.036 [2024-11-19 09:47:50.513042] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:03.036 [2024-11-19 09:47:50.513135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.036 [2024-11-19 09:47:50.513159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:03.036 [2024-11-19 09:47:50.518348] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:03.036 [2024-11-19 09:47:50.518432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.036 [2024-11-19 09:47:50.518455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:03.036 [2024-11-19 09:47:50.523554] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:03.036 [2024-11-19 09:47:50.523649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.036 [2024-11-19 09:47:50.523671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:03.036 [2024-11-19 09:47:50.528607] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:03.036 [2024-11-19 09:47:50.528703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.036 [2024-11-19 09:47:50.528726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:03.036 [2024-11-19 09:47:50.533662] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:03.036 [2024-11-19 09:47:50.533754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.036 [2024-11-19 09:47:50.533776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:03.036 [2024-11-19 09:47:50.538921] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:03.036 [2024-11-19 09:47:50.539026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.036 [2024-11-19 09:47:50.539049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:03.036 [2024-11-19 09:47:50.544073] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:03.036 [2024-11-19 09:47:50.544168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.036 [2024-11-19 09:47:50.544191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:03.036 [2024-11-19 09:47:50.549323] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:03.036 [2024-11-19 09:47:50.549415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.036 [2024-11-19 09:47:50.549438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:03.036 [2024-11-19 09:47:50.554545] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:03.036 [2024-11-19 09:47:50.554655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.036 [2024-11-19 09:47:50.554678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:03.036 [2024-11-19 09:47:50.559751] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:03.036 [2024-11-19 09:47:50.559842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.036 [2024-11-19 09:47:50.559866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:03.036 [2024-11-19 09:47:50.564987] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:03.036 [2024-11-19 09:47:50.565069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.036 [2024-11-19 09:47:50.565093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:03.036 [2024-11-19 09:47:50.570426] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:03.036 [2024-11-19 09:47:50.570513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.036 [2024-11-19 09:47:50.570537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:03.036 [2024-11-19 09:47:50.575861] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:03.036 [2024-11-19 09:47:50.575969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.036 [2024-11-19 09:47:50.575992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:03.036 [2024-11-19 09:47:50.581232] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:03.036 [2024-11-19 09:47:50.581347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.036 [2024-11-19 09:47:50.581370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:03.036 [2024-11-19 09:47:50.586440] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:03.036 [2024-11-19 09:47:50.586535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.036 [2024-11-19 09:47:50.586558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:03.036 [2024-11-19 09:47:50.591522] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:03.036 [2024-11-19 09:47:50.591630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.036 [2024-11-19 09:47:50.591652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:03.036 [2024-11-19 09:47:50.596683] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:03.036 [2024-11-19 09:47:50.596779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.036 [2024-11-19 09:47:50.596802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:03.036 [2024-11-19 09:47:50.601755] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:03.036 [2024-11-19 09:47:50.601853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.036 [2024-11-19 09:47:50.601875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:03.036 [2024-11-19 09:47:50.606946] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:03.036 [2024-11-19 09:47:50.607054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.036 [2024-11-19 09:47:50.607077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:03.036 [2024-11-19 09:47:50.612157] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:03.036 [2024-11-19 09:47:50.612300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.036 [2024-11-19 09:47:50.612323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:03.037 [2024-11-19 09:47:50.617243] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:03.037 [2024-11-19 09:47:50.617349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.037 [2024-11-19 09:47:50.617371] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:03.037 [2024-11-19 09:47:50.622297] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:03.037 [2024-11-19 09:47:50.622393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.037 [2024-11-19 09:47:50.622416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:03.037 [2024-11-19 09:47:50.627330] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:03.037 [2024-11-19 09:47:50.627412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.037 [2024-11-19 09:47:50.627436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:03.037 [2024-11-19 09:47:50.632370] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:03.037 [2024-11-19 09:47:50.632487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.037 [2024-11-19 09:47:50.632510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:03.037 [2024-11-19 09:47:50.637582] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:03.037 [2024-11-19 09:47:50.637701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.037 [2024-11-19 09:47:50.637723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:03.037 [2024-11-19 09:47:50.642874] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:03.037 [2024-11-19 09:47:50.642977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.037 [2024-11-19 09:47:50.642999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:03.037 [2024-11-19 09:47:50.648043] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:03.037 [2024-11-19 09:47:50.648135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.037 [2024-11-19 09:47:50.648157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:03.037 [2024-11-19 09:47:50.653181] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:03.037 [2024-11-19 09:47:50.653278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.037 [2024-11-19 
09:47:50.653319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:03.296 [2024-11-19 09:47:50.658736] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:03.296 [2024-11-19 09:47:50.658828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.296 [2024-11-19 09:47:50.658859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:03.296 [2024-11-19 09:47:50.664076] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:03.296 [2024-11-19 09:47:50.664170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.296 [2024-11-19 09:47:50.664192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:03.296 [2024-11-19 09:47:50.669475] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:03.296 [2024-11-19 09:47:50.669570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.296 [2024-11-19 09:47:50.669592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:03.296 [2024-11-19 09:47:50.675343] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:03.296 [2024-11-19 09:47:50.675431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.296 [2024-11-19 09:47:50.675453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:03.297 [2024-11-19 09:47:50.682570] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:03.297 [2024-11-19 09:47:50.682675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.297 [2024-11-19 09:47:50.682698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:03.297 [2024-11-19 09:47:50.688876] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:03.297 [2024-11-19 09:47:50.688951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.297 [2024-11-19 09:47:50.688974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:03.297 [2024-11-19 09:47:50.694292] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:03.297 [2024-11-19 09:47:50.694443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:20:03.297 [2024-11-19 09:47:50.694465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:03.297 [2024-11-19 09:47:50.699666] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:03.297 [2024-11-19 09:47:50.699749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.297 [2024-11-19 09:47:50.699772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:03.297 [2024-11-19 09:47:50.705051] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:03.297 [2024-11-19 09:47:50.705168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.297 [2024-11-19 09:47:50.705190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:03.297 [2024-11-19 09:47:50.710371] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:03.297 [2024-11-19 09:47:50.710450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.297 [2024-11-19 09:47:50.710473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:03.297 [2024-11-19 09:47:50.715748] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:03.297 [2024-11-19 09:47:50.715823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.297 [2024-11-19 09:47:50.715846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:03.297 [2024-11-19 09:47:50.720925] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:03.297 [2024-11-19 09:47:50.721020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.297 [2024-11-19 09:47:50.721043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:03.297 [2024-11-19 09:47:50.726268] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:03.297 [2024-11-19 09:47:50.726361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.297 [2024-11-19 09:47:50.726383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:03.297 [2024-11-19 09:47:50.731668] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:03.297 [2024-11-19 09:47:50.731759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8480 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.297 [2024-11-19 09:47:50.731781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:03.297 [2024-11-19 09:47:50.737027] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:03.297 [2024-11-19 09:47:50.737110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.297 [2024-11-19 09:47:50.737132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:03.297 [2024-11-19 09:47:50.742203] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:03.297 [2024-11-19 09:47:50.742318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.297 [2024-11-19 09:47:50.742341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:03.297 [2024-11-19 09:47:50.747590] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:03.297 [2024-11-19 09:47:50.747667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.297 [2024-11-19 09:47:50.747691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:03.297 [2024-11-19 09:47:50.752740] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:03.297 [2024-11-19 09:47:50.752845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.297 [2024-11-19 09:47:50.752868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:03.297 [2024-11-19 09:47:50.757956] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:03.297 [2024-11-19 09:47:50.758038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.297 [2024-11-19 09:47:50.758061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:03.297 [2024-11-19 09:47:50.763154] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:03.297 [2024-11-19 09:47:50.763288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.297 [2024-11-19 09:47:50.763312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:03.297 [2024-11-19 09:47:50.768380] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:03.297 [2024-11-19 09:47:50.768474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.297 [2024-11-19 09:47:50.768497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:03.297 [2024-11-19 09:47:50.773571] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:03.297 [2024-11-19 09:47:50.773665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.297 [2024-11-19 09:47:50.773688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:03.297 [2024-11-19 09:47:50.778837] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:03.297 [2024-11-19 09:47:50.778951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.297 [2024-11-19 09:47:50.778973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:03.297 [2024-11-19 09:47:50.784072] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:03.297 [2024-11-19 09:47:50.784163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.297 [2024-11-19 09:47:50.784186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:03.297 [2024-11-19 09:47:50.789392] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:03.297 [2024-11-19 09:47:50.789475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.297 [2024-11-19 09:47:50.789498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:03.297 [2024-11-19 09:47:50.794571] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:03.297 [2024-11-19 09:47:50.794681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.297 [2024-11-19 09:47:50.794704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:03.297 [2024-11-19 09:47:50.799839] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:03.297 [2024-11-19 09:47:50.799944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.297 [2024-11-19 09:47:50.799966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:03.297 [2024-11-19 09:47:50.805159] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:03.297 [2024-11-19 09:47:50.805301] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.297 [2024-11-19 09:47:50.805324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:03.297 [2024-11-19 09:47:50.810359] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:03.297 [2024-11-19 09:47:50.810440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.297 [2024-11-19 09:47:50.810464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:03.297 [2024-11-19 09:47:50.815551] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:03.297 [2024-11-19 09:47:50.815632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.297 [2024-11-19 09:47:50.815655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:03.297 [2024-11-19 09:47:50.820917] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:03.297 [2024-11-19 09:47:50.821022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.297 [2024-11-19 09:47:50.821045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:03.298 [2024-11-19 09:47:50.826082] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:03.298 [2024-11-19 09:47:50.826173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.298 [2024-11-19 09:47:50.826196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:03.298 [2024-11-19 09:47:50.831376] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:03.298 [2024-11-19 09:47:50.831471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.298 [2024-11-19 09:47:50.831495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:03.298 [2024-11-19 09:47:50.836864] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:03.298 [2024-11-19 09:47:50.836949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.298 [2024-11-19 09:47:50.836974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:03.298 [2024-11-19 09:47:50.842184] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:03.298 [2024-11-19 09:47:50.842286] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.298 [2024-11-19 09:47:50.842309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:03.298 [2024-11-19 09:47:50.847437] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:03.298 [2024-11-19 09:47:50.847535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.298 [2024-11-19 09:47:50.847557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:03.298 [2024-11-19 09:47:50.852871] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:03.298 [2024-11-19 09:47:50.852966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.298 [2024-11-19 09:47:50.852988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:03.298 [2024-11-19 09:47:50.858478] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:03.298 [2024-11-19 09:47:50.858551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.298 [2024-11-19 09:47:50.858574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:03.298 [2024-11-19 09:47:50.864434] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:03.298 [2024-11-19 09:47:50.864589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.298 [2024-11-19 09:47:50.864612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:03.298 [2024-11-19 09:47:50.871029] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:03.298 [2024-11-19 09:47:50.871117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.298 [2024-11-19 09:47:50.871140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:03.298 [2024-11-19 09:47:50.876311] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:03.298 [2024-11-19 09:47:50.876393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.298 [2024-11-19 09:47:50.876415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:03.298 [2024-11-19 09:47:50.881764] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:03.298 [2024-11-19 
09:47:50.881866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.298 [2024-11-19 09:47:50.881888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:03.298 [2024-11-19 09:47:50.887105] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:03.298 [2024-11-19 09:47:50.887203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.298 [2024-11-19 09:47:50.887239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:03.298 [2024-11-19 09:47:50.892437] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:03.298 [2024-11-19 09:47:50.892568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.298 [2024-11-19 09:47:50.892590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:03.298 [2024-11-19 09:47:50.897501] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:03.298 [2024-11-19 09:47:50.897595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.298 [2024-11-19 09:47:50.897617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:03.298 [2024-11-19 09:47:50.902525] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:03.298 [2024-11-19 09:47:50.902643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.298 [2024-11-19 09:47:50.902665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:03.298 [2024-11-19 09:47:50.907729] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:03.298 [2024-11-19 09:47:50.907824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.298 [2024-11-19 09:47:50.907861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:03.298 [2024-11-19 09:47:50.912942] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:03.298 [2024-11-19 09:47:50.913063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.298 [2024-11-19 09:47:50.913093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:03.559 [2024-11-19 09:47:50.918411] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with 
pdu=0x2000166ff3c8 00:20:03.559 [2024-11-19 09:47:50.918501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.559 [2024-11-19 09:47:50.918524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:03.559 [2024-11-19 09:47:50.923510] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:03.559 [2024-11-19 09:47:50.923588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.559 [2024-11-19 09:47:50.923611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:03.559 [2024-11-19 09:47:50.928702] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:03.559 [2024-11-19 09:47:50.928797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.559 [2024-11-19 09:47:50.928819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:03.559 [2024-11-19 09:47:50.933866] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:03.559 [2024-11-19 09:47:50.933967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.559 [2024-11-19 09:47:50.933988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:03.559 [2024-11-19 09:47:50.938969] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:03.559 [2024-11-19 09:47:50.939080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.559 [2024-11-19 09:47:50.939107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:03.559 [2024-11-19 09:47:50.944055] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:03.559 [2024-11-19 09:47:50.944155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.559 [2024-11-19 09:47:50.944177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:03.559 [2024-11-19 09:47:50.949371] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:03.559 [2024-11-19 09:47:50.949473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.559 [2024-11-19 09:47:50.949502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:03.559 [2024-11-19 09:47:50.954472] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:03.559 [2024-11-19 09:47:50.954578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.559 [2024-11-19 09:47:50.954601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:03.559 [2024-11-19 09:47:50.959646] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:03.559 [2024-11-19 09:47:50.959763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.559 [2024-11-19 09:47:50.959785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:03.559 [2024-11-19 09:47:50.964881] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:03.559 [2024-11-19 09:47:50.964982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.559 [2024-11-19 09:47:50.965003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:03.559 [2024-11-19 09:47:50.969996] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:03.559 [2024-11-19 09:47:50.970097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.559 [2024-11-19 09:47:50.970118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:03.559 [2024-11-19 09:47:50.975002] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:03.559 [2024-11-19 09:47:50.975102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.559 [2024-11-19 09:47:50.975124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:03.559 [2024-11-19 09:47:50.980136] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:03.559 [2024-11-19 09:47:50.980250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.559 [2024-11-19 09:47:50.980272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:03.559 [2024-11-19 09:47:50.985645] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:03.559 [2024-11-19 09:47:50.985741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.559 [2024-11-19 09:47:50.985763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:03.559 [2024-11-19 09:47:50.991044] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:03.559 [2024-11-19 09:47:50.991147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.559 [2024-11-19 09:47:50.991169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:03.559 [2024-11-19 09:47:50.996078] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:03.559 [2024-11-19 09:47:50.996182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.559 [2024-11-19 09:47:50.996204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:03.559 [2024-11-19 09:47:51.001202] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:03.559 [2024-11-19 09:47:51.001315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.559 [2024-11-19 09:47:51.001336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:03.559 [2024-11-19 09:47:51.006293] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:03.559 [2024-11-19 09:47:51.006413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.559 [2024-11-19 09:47:51.006435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:03.559 [2024-11-19 09:47:51.011179] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:03.559 [2024-11-19 09:47:51.011376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.559 [2024-11-19 09:47:51.011398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:03.559 [2024-11-19 09:47:51.016155] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:03.559 [2024-11-19 09:47:51.016280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.559 [2024-11-19 09:47:51.016301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:03.559 [2024-11-19 09:47:51.021149] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:03.559 [2024-11-19 09:47:51.021243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.559 [2024-11-19 09:47:51.021278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:03.559 [2024-11-19 09:47:51.026120] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:03.559 [2024-11-19 09:47:51.026248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.559 [2024-11-19 09:47:51.026270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:03.559 [2024-11-19 09:47:51.031147] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:03.559 [2024-11-19 09:47:51.031294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.559 [2024-11-19 09:47:51.031317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:03.559 [2024-11-19 09:47:51.036054] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:03.559 [2024-11-19 09:47:51.036149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.559 [2024-11-19 09:47:51.036170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:03.559 [2024-11-19 09:47:51.041180] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:03.559 [2024-11-19 09:47:51.041280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.559 [2024-11-19 09:47:51.041301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:03.559 [2024-11-19 09:47:51.046289] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:03.559 [2024-11-19 09:47:51.046392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.559 [2024-11-19 09:47:51.046414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:03.559 [2024-11-19 09:47:51.051308] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:03.559 [2024-11-19 09:47:51.051406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.559 [2024-11-19 09:47:51.051430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:03.559 [2024-11-19 09:47:51.056566] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:03.559 [2024-11-19 09:47:51.056691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.559 [2024-11-19 09:47:51.056712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:03.559 
[2024-11-19 09:47:51.061979] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:03.559 [2024-11-19 09:47:51.062099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.559 [2024-11-19 09:47:51.062121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:03.559 [2024-11-19 09:47:51.068002] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:03.559 [2024-11-19 09:47:51.068093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.559 [2024-11-19 09:47:51.068115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:03.559 [2024-11-19 09:47:51.073368] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:03.559 [2024-11-19 09:47:51.073490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.559 [2024-11-19 09:47:51.073512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:03.559 [2024-11-19 09:47:51.078568] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:03.559 [2024-11-19 09:47:51.078687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.559 [2024-11-19 09:47:51.078709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:03.559 [2024-11-19 09:47:51.083624] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:03.559 [2024-11-19 09:47:51.083742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.559 [2024-11-19 09:47:51.083764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:03.559 [2024-11-19 09:47:51.088729] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:03.559 [2024-11-19 09:47:51.088834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.559 [2024-11-19 09:47:51.088857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:03.559 [2024-11-19 09:47:51.093771] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:03.559 [2024-11-19 09:47:51.093869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.559 [2024-11-19 09:47:51.093891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 
p:0 m:0 dnr:0 00:20:03.559 [2024-11-19 09:47:51.098796] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:03.559 [2024-11-19 09:47:51.098900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.559 [2024-11-19 09:47:51.098922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:03.559 [2024-11-19 09:47:51.104042] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:03.559 [2024-11-19 09:47:51.104144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.560 [2024-11-19 09:47:51.104166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:03.560 [2024-11-19 09:47:51.109109] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:03.560 [2024-11-19 09:47:51.109190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.560 [2024-11-19 09:47:51.109212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:03.560 [2024-11-19 09:47:51.114167] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:03.560 [2024-11-19 09:47:51.114301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.560 [2024-11-19 09:47:51.114323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:03.560 [2024-11-19 09:47:51.119116] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:03.560 [2024-11-19 09:47:51.119250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.560 [2024-11-19 09:47:51.119273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:03.560 [2024-11-19 09:47:51.124169] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:03.560 [2024-11-19 09:47:51.124296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.560 [2024-11-19 09:47:51.124318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:03.560 [2024-11-19 09:47:51.129199] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:03.560 [2024-11-19 09:47:51.129312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.560 [2024-11-19 09:47:51.129334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:03.560 [2024-11-19 09:47:51.134335] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:03.560 [2024-11-19 09:47:51.134439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.560 [2024-11-19 09:47:51.134461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:03.560 [2024-11-19 09:47:51.139303] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:03.560 [2024-11-19 09:47:51.139378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.560 [2024-11-19 09:47:51.139401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:03.560 [2024-11-19 09:47:51.144489] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:03.560 [2024-11-19 09:47:51.144591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.560 [2024-11-19 09:47:51.144613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:03.560 [2024-11-19 09:47:51.149632] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:03.560 [2024-11-19 09:47:51.149759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.560 [2024-11-19 09:47:51.149782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:03.560 [2024-11-19 09:47:51.154901] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:03.560 [2024-11-19 09:47:51.155023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.560 [2024-11-19 09:47:51.155060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:03.560 [2024-11-19 09:47:51.160266] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:03.560 [2024-11-19 09:47:51.160375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.560 [2024-11-19 09:47:51.160398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:03.560 [2024-11-19 09:47:51.165386] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:03.560 [2024-11-19 09:47:51.165482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.560 [2024-11-19 09:47:51.165504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:03.560 [2024-11-19 09:47:51.170673] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:03.560 [2024-11-19 09:47:51.170771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.560 [2024-11-19 09:47:51.170793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:03.560 [2024-11-19 09:47:51.175923] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:03.560 [2024-11-19 09:47:51.176012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.560 [2024-11-19 09:47:51.176035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:03.819 [2024-11-19 09:47:51.181154] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:03.819 [2024-11-19 09:47:51.181247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.819 [2024-11-19 09:47:51.181271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:03.819 [2024-11-19 09:47:51.186460] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:03.819 [2024-11-19 09:47:51.186550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.819 [2024-11-19 09:47:51.186573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:03.819 [2024-11-19 09:47:51.191747] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:03.819 [2024-11-19 09:47:51.191850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.819 [2024-11-19 09:47:51.191873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:03.819 [2024-11-19 09:47:51.197025] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:03.819 [2024-11-19 09:47:51.197155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.820 [2024-11-19 09:47:51.197177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:03.820 [2024-11-19 09:47:51.202249] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:03.820 [2024-11-19 09:47:51.202368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.820 [2024-11-19 09:47:51.202391] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:03.820 [2024-11-19 09:47:51.207569] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:03.820 [2024-11-19 09:47:51.207660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.820 [2024-11-19 09:47:51.207683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:03.820 [2024-11-19 09:47:51.212727] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:03.820 [2024-11-19 09:47:51.212816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.820 [2024-11-19 09:47:51.212839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:03.820 [2024-11-19 09:47:51.217994] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:03.820 [2024-11-19 09:47:51.218077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.820 [2024-11-19 09:47:51.218100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:03.820 [2024-11-19 09:47:51.223276] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:03.820 [2024-11-19 09:47:51.223358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.820 [2024-11-19 09:47:51.223381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:03.820 [2024-11-19 09:47:51.228403] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:03.820 [2024-11-19 09:47:51.228485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.820 [2024-11-19 09:47:51.228508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:03.820 [2024-11-19 09:47:51.233750] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:03.820 [2024-11-19 09:47:51.233844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.820 [2024-11-19 09:47:51.233866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:03.820 [2024-11-19 09:47:51.239111] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:03.820 [2024-11-19 09:47:51.239269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.820 [2024-11-19 09:47:51.239292] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:03.820 [2024-11-19 09:47:51.244612] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:03.820 [2024-11-19 09:47:51.244741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.820 [2024-11-19 09:47:51.244763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:03.820 [2024-11-19 09:47:51.250108] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:03.820 [2024-11-19 09:47:51.250205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.820 [2024-11-19 09:47:51.250257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:03.820 [2024-11-19 09:47:51.255279] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:03.820 [2024-11-19 09:47:51.255376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.820 [2024-11-19 09:47:51.255398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:03.820 [2024-11-19 09:47:51.260459] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:03.820 [2024-11-19 09:47:51.260554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.820 [2024-11-19 09:47:51.260576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:03.820 [2024-11-19 09:47:51.265548] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:03.820 [2024-11-19 09:47:51.265641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.820 [2024-11-19 09:47:51.265663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:03.820 [2024-11-19 09:47:51.270716] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:03.820 [2024-11-19 09:47:51.270813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.820 [2024-11-19 09:47:51.270835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:03.820 [2024-11-19 09:47:51.275742] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:03.820 [2024-11-19 09:47:51.275860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.820 [2024-11-19 
09:47:51.275883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:03.820 [2024-11-19 09:47:51.280833] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:03.820 [2024-11-19 09:47:51.280928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.820 [2024-11-19 09:47:51.280950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:03.820 [2024-11-19 09:47:51.285909] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:03.820 [2024-11-19 09:47:51.286000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.820 [2024-11-19 09:47:51.286022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:03.820 [2024-11-19 09:47:51.291355] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:03.820 [2024-11-19 09:47:51.291450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.820 [2024-11-19 09:47:51.291473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:03.820 [2024-11-19 09:47:51.296818] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:03.820 [2024-11-19 09:47:51.296962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.820 [2024-11-19 09:47:51.296998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:03.820 [2024-11-19 09:47:51.301946] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:03.820 [2024-11-19 09:47:51.302027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.820 [2024-11-19 09:47:51.302048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:03.820 [2024-11-19 09:47:51.307014] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:03.820 [2024-11-19 09:47:51.307109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.820 [2024-11-19 09:47:51.307130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:03.820 [2024-11-19 09:47:51.312144] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:03.820 [2024-11-19 09:47:51.312246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:20:03.820 [2024-11-19 09:47:51.312268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:03.820 [2024-11-19 09:47:51.317166] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:03.820 [2024-11-19 09:47:51.317272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.820 [2024-11-19 09:47:51.317295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:03.820 [2024-11-19 09:47:51.322334] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:03.820 [2024-11-19 09:47:51.322429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.820 [2024-11-19 09:47:51.322451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:03.820 [2024-11-19 09:47:51.327355] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:03.820 [2024-11-19 09:47:51.327440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.820 [2024-11-19 09:47:51.327463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:03.820 [2024-11-19 09:47:51.332418] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:03.820 [2024-11-19 09:47:51.332521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.820 [2024-11-19 09:47:51.332543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:03.820 [2024-11-19 09:47:51.337575] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:03.820 [2024-11-19 09:47:51.337683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.820 [2024-11-19 09:47:51.337705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:03.820 [2024-11-19 09:47:51.342872] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:03.820 [2024-11-19 09:47:51.342999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.820 [2024-11-19 09:47:51.343022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:03.820 [2024-11-19 09:47:51.348154] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:03.820 [2024-11-19 09:47:51.348266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1408 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.820 [2024-11-19 09:47:51.348287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:03.820 [2024-11-19 09:47:51.353328] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:03.820 [2024-11-19 09:47:51.353433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.820 [2024-11-19 09:47:51.353454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:03.820 [2024-11-19 09:47:51.358399] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:03.820 [2024-11-19 09:47:51.358502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.820 [2024-11-19 09:47:51.358524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:03.820 [2024-11-19 09:47:51.363365] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:03.820 [2024-11-19 09:47:51.363462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.820 [2024-11-19 09:47:51.363485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:03.820 [2024-11-19 09:47:51.368354] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:03.820 [2024-11-19 09:47:51.368456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.820 [2024-11-19 09:47:51.368478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:03.820 [2024-11-19 09:47:51.373278] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:03.820 [2024-11-19 09:47:51.373378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.820 [2024-11-19 09:47:51.373398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:03.820 [2024-11-19 09:47:51.379613] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:03.820 [2024-11-19 09:47:51.379732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.820 [2024-11-19 09:47:51.379754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:03.820 [2024-11-19 09:47:51.386295] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:03.820 [2024-11-19 09:47:51.386399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.820 [2024-11-19 09:47:51.386420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:03.820 [2024-11-19 09:47:51.393140] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:03.820 [2024-11-19 09:47:51.393279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.820 [2024-11-19 09:47:51.393301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:03.820 [2024-11-19 09:47:51.399734] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:03.820 [2024-11-19 09:47:51.399854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.820 [2024-11-19 09:47:51.399875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:03.820 [2024-11-19 09:47:51.405892] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:03.820 [2024-11-19 09:47:51.406006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.820 [2024-11-19 09:47:51.406029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:03.820 [2024-11-19 09:47:51.413065] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:03.820 [2024-11-19 09:47:51.413163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.820 [2024-11-19 09:47:51.413185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:03.820 [2024-11-19 09:47:51.419681] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:03.820 [2024-11-19 09:47:51.419790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.820 [2024-11-19 09:47:51.419812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:03.820 [2024-11-19 09:47:51.426206] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:03.820 [2024-11-19 09:47:51.426355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.820 [2024-11-19 09:47:51.426376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:03.820 [2024-11-19 09:47:51.432691] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:03.820 [2024-11-19 09:47:51.432808] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.820 [2024-11-19 09:47:51.432830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:03.820 [2024-11-19 09:47:51.439645] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:03.820 [2024-11-19 09:47:51.439727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.820 [2024-11-19 09:47:51.439749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:04.081 [2024-11-19 09:47:51.446422] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.081 [2024-11-19 09:47:51.446506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.081 [2024-11-19 09:47:51.446529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:04.081 [2024-11-19 09:47:51.453230] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.081 [2024-11-19 09:47:51.453348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.081 [2024-11-19 09:47:51.453370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:04.081 [2024-11-19 09:47:51.459929] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.081 [2024-11-19 09:47:51.460049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.081 [2024-11-19 09:47:51.460070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:04.081 [2024-11-19 09:47:51.465932] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.081 [2024-11-19 09:47:51.466026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.081 [2024-11-19 09:47:51.466047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:04.081 5747.00 IOPS, 718.38 MiB/s [2024-11-19T09:47:51.704Z] [2024-11-19 09:47:51.472346] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.081 [2024-11-19 09:47:51.472455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.082 [2024-11-19 09:47:51.472479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:04.082 [2024-11-19 09:47:51.477554] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) 
with pdu=0x2000166ff3c8 00:20:04.082 [2024-11-19 09:47:51.477664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.082 [2024-11-19 09:47:51.477686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:04.082 [2024-11-19 09:47:51.482963] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.082 [2024-11-19 09:47:51.483054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.082 [2024-11-19 09:47:51.483077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:04.082 [2024-11-19 09:47:51.488070] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.082 [2024-11-19 09:47:51.488165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.082 [2024-11-19 09:47:51.488187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:04.082 [2024-11-19 09:47:51.493468] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.082 [2024-11-19 09:47:51.493558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.082 [2024-11-19 09:47:51.493582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:04.082 [2024-11-19 09:47:51.498782] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.082 [2024-11-19 09:47:51.498889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.082 [2024-11-19 09:47:51.498912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:04.082 [2024-11-19 09:47:51.504099] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.082 [2024-11-19 09:47:51.504191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.082 [2024-11-19 09:47:51.504226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:04.082 [2024-11-19 09:47:51.509381] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.082 [2024-11-19 09:47:51.509450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.082 [2024-11-19 09:47:51.509473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:04.082 [2024-11-19 09:47:51.514706] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.082 [2024-11-19 09:47:51.514801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.082 [2024-11-19 09:47:51.514823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:04.082 [2024-11-19 09:47:51.520006] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.082 [2024-11-19 09:47:51.520109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.082 [2024-11-19 09:47:51.520131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:04.082 [2024-11-19 09:47:51.525322] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.082 [2024-11-19 09:47:51.525424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.082 [2024-11-19 09:47:51.525447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:04.082 [2024-11-19 09:47:51.530552] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.082 [2024-11-19 09:47:51.530654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.082 [2024-11-19 09:47:51.530676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:04.082 [2024-11-19 09:47:51.535811] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.082 [2024-11-19 09:47:51.535905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.082 [2024-11-19 09:47:51.535928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:04.082 [2024-11-19 09:47:51.541020] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.082 [2024-11-19 09:47:51.541122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.082 [2024-11-19 09:47:51.541144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:04.082 [2024-11-19 09:47:51.546276] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.082 [2024-11-19 09:47:51.546407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.082 [2024-11-19 09:47:51.546429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:04.082 [2024-11-19 09:47:51.551606] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.082 [2024-11-19 09:47:51.551689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.082 [2024-11-19 09:47:51.551712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:04.082 [2024-11-19 09:47:51.557073] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.082 [2024-11-19 09:47:51.557153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.082 [2024-11-19 09:47:51.557176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:04.082 [2024-11-19 09:47:51.562109] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.082 [2024-11-19 09:47:51.562209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.082 [2024-11-19 09:47:51.562231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:04.082 [2024-11-19 09:47:51.567339] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.082 [2024-11-19 09:47:51.567421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.082 [2024-11-19 09:47:51.567444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:04.082 [2024-11-19 09:47:51.572728] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.082 [2024-11-19 09:47:51.572832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.082 [2024-11-19 09:47:51.572856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:04.082 [2024-11-19 09:47:51.578130] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.082 [2024-11-19 09:47:51.578249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.082 [2024-11-19 09:47:51.578273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:04.082 [2024-11-19 09:47:51.583484] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.082 [2024-11-19 09:47:51.583592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.082 [2024-11-19 09:47:51.583614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:04.082 
[2024-11-19 09:47:51.588658] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.082 [2024-11-19 09:47:51.588742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.082 [2024-11-19 09:47:51.588766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:04.082 [2024-11-19 09:47:51.593821] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.082 [2024-11-19 09:47:51.593911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.083 [2024-11-19 09:47:51.593934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:04.083 [2024-11-19 09:47:51.599137] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.083 [2024-11-19 09:47:51.599270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.083 [2024-11-19 09:47:51.599293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:04.083 [2024-11-19 09:47:51.604589] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.083 [2024-11-19 09:47:51.604711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.083 [2024-11-19 09:47:51.604733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:04.083 [2024-11-19 09:47:51.609972] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.083 [2024-11-19 09:47:51.610053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.083 [2024-11-19 09:47:51.610075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:04.083 [2024-11-19 09:47:51.615372] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.083 [2024-11-19 09:47:51.615466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.083 [2024-11-19 09:47:51.615488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:04.083 [2024-11-19 09:47:51.620572] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.083 [2024-11-19 09:47:51.620669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.083 [2024-11-19 09:47:51.620692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 
p:0 m:0 dnr:0 00:20:04.083 [2024-11-19 09:47:51.625771] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.083 [2024-11-19 09:47:51.625853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.083 [2024-11-19 09:47:51.625875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:04.083 [2024-11-19 09:47:51.630929] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.083 [2024-11-19 09:47:51.631032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.083 [2024-11-19 09:47:51.631054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:04.083 [2024-11-19 09:47:51.636087] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.083 [2024-11-19 09:47:51.636192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.083 [2024-11-19 09:47:51.636215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:04.083 [2024-11-19 09:47:51.641233] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.083 [2024-11-19 09:47:51.641329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.083 [2024-11-19 09:47:51.641351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:04.083 [2024-11-19 09:47:51.646460] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.083 [2024-11-19 09:47:51.646553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.083 [2024-11-19 09:47:51.646575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:04.083 [2024-11-19 09:47:51.651613] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.083 [2024-11-19 09:47:51.651696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.083 [2024-11-19 09:47:51.651719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:04.083 [2024-11-19 09:47:51.656745] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.083 [2024-11-19 09:47:51.656825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.083 [2024-11-19 09:47:51.656847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:04.083 [2024-11-19 09:47:51.661914] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.083 [2024-11-19 09:47:51.661989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.083 [2024-11-19 09:47:51.662012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:04.083 [2024-11-19 09:47:51.667058] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.083 [2024-11-19 09:47:51.667159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.083 [2024-11-19 09:47:51.667181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:04.083 [2024-11-19 09:47:51.672296] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.083 [2024-11-19 09:47:51.672381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.083 [2024-11-19 09:47:51.672403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:04.083 [2024-11-19 09:47:51.677444] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.083 [2024-11-19 09:47:51.677529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.083 [2024-11-19 09:47:51.677552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:04.083 [2024-11-19 09:47:51.682629] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.083 [2024-11-19 09:47:51.682725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.083 [2024-11-19 09:47:51.682748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:04.083 [2024-11-19 09:47:51.687844] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.083 [2024-11-19 09:47:51.687966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.083 [2024-11-19 09:47:51.687988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:04.083 [2024-11-19 09:47:51.693039] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.083 [2024-11-19 09:47:51.693134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.083 [2024-11-19 09:47:51.693157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:04.083 [2024-11-19 09:47:51.698368] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.083 [2024-11-19 09:47:51.698453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.083 [2024-11-19 09:47:51.698475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:04.343 [2024-11-19 09:47:51.703526] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.343 [2024-11-19 09:47:51.703616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.343 [2024-11-19 09:47:51.703640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:04.344 [2024-11-19 09:47:51.708712] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.344 [2024-11-19 09:47:51.708804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.344 [2024-11-19 09:47:51.708826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:04.344 [2024-11-19 09:47:51.714064] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.344 [2024-11-19 09:47:51.714164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.344 [2024-11-19 09:47:51.714186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:04.344 [2024-11-19 09:47:51.720143] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.344 [2024-11-19 09:47:51.720233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.344 [2024-11-19 09:47:51.720256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:04.344 [2024-11-19 09:47:51.725568] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.344 [2024-11-19 09:47:51.725671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.344 [2024-11-19 09:47:51.725694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:04.344 [2024-11-19 09:47:51.730884] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.344 [2024-11-19 09:47:51.730963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.344 [2024-11-19 09:47:51.730986] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:04.344 [2024-11-19 09:47:51.736330] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.344 [2024-11-19 09:47:51.736419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.344 [2024-11-19 09:47:51.736443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:04.344 [2024-11-19 09:47:51.741698] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.344 [2024-11-19 09:47:51.741775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.344 [2024-11-19 09:47:51.741798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:04.344 [2024-11-19 09:47:51.746887] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.344 [2024-11-19 09:47:51.746985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.344 [2024-11-19 09:47:51.747008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:04.344 [2024-11-19 09:47:51.752096] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.344 [2024-11-19 09:47:51.752194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.344 [2024-11-19 09:47:51.752217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:04.344 [2024-11-19 09:47:51.757353] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.344 [2024-11-19 09:47:51.757435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.344 [2024-11-19 09:47:51.757458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:04.344 [2024-11-19 09:47:51.762620] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.344 [2024-11-19 09:47:51.762704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.344 [2024-11-19 09:47:51.762732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:04.344 [2024-11-19 09:47:51.768028] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.344 [2024-11-19 09:47:51.768140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.344 [2024-11-19 
09:47:51.768163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:04.344 [2024-11-19 09:47:51.773127] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.344 [2024-11-19 09:47:51.773258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.344 [2024-11-19 09:47:51.773283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:04.344 [2024-11-19 09:47:51.778195] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.344 [2024-11-19 09:47:51.778303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.344 [2024-11-19 09:47:51.778325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:04.344 [2024-11-19 09:47:51.783361] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.344 [2024-11-19 09:47:51.783458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.344 [2024-11-19 09:47:51.783482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:04.344 [2024-11-19 09:47:51.788373] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.344 [2024-11-19 09:47:51.788486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.344 [2024-11-19 09:47:51.788508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:04.344 [2024-11-19 09:47:51.793374] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.344 [2024-11-19 09:47:51.793471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.344 [2024-11-19 09:47:51.793493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:04.344 [2024-11-19 09:47:51.798471] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.344 [2024-11-19 09:47:51.798573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.344 [2024-11-19 09:47:51.798594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:04.344 [2024-11-19 09:47:51.803527] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.344 [2024-11-19 09:47:51.803609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:20:04.344 [2024-11-19 09:47:51.803633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:04.344 [2024-11-19 09:47:51.808607] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.344 [2024-11-19 09:47:51.808704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.344 [2024-11-19 09:47:51.808725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:04.344 [2024-11-19 09:47:51.813946] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.344 [2024-11-19 09:47:51.814112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.344 [2024-11-19 09:47:51.814133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:04.344 [2024-11-19 09:47:51.819096] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.344 [2024-11-19 09:47:51.819233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.344 [2024-11-19 09:47:51.819257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:04.344 [2024-11-19 09:47:51.824151] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.344 [2024-11-19 09:47:51.824265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.344 [2024-11-19 09:47:51.824299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:04.344 [2024-11-19 09:47:51.829169] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.344 [2024-11-19 09:47:51.829302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.344 [2024-11-19 09:47:51.829325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:04.344 [2024-11-19 09:47:51.834332] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.344 [2024-11-19 09:47:51.834427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.344 [2024-11-19 09:47:51.834448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:04.344 [2024-11-19 09:47:51.839387] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.344 [2024-11-19 09:47:51.839469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10208 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.344 [2024-11-19 09:47:51.839492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:04.344 [2024-11-19 09:47:51.844435] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.344 [2024-11-19 09:47:51.844530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.345 [2024-11-19 09:47:51.844552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:04.345 [2024-11-19 09:47:51.849486] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.345 [2024-11-19 09:47:51.849584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.345 [2024-11-19 09:47:51.849606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:04.345 [2024-11-19 09:47:51.854730] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.345 [2024-11-19 09:47:51.854816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.345 [2024-11-19 09:47:51.854839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:04.345 [2024-11-19 09:47:51.860101] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.345 [2024-11-19 09:47:51.860197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.345 [2024-11-19 09:47:51.860219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:04.345 [2024-11-19 09:47:51.865344] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.345 [2024-11-19 09:47:51.865429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.345 [2024-11-19 09:47:51.865452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:04.345 [2024-11-19 09:47:51.871659] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.345 [2024-11-19 09:47:51.871745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.345 [2024-11-19 09:47:51.871768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:04.345 [2024-11-19 09:47:51.877367] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.345 [2024-11-19 09:47:51.877456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.345 [2024-11-19 09:47:51.877479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:04.345 [2024-11-19 09:47:51.883239] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.345 [2024-11-19 09:47:51.883359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.345 [2024-11-19 09:47:51.883382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:04.345 [2024-11-19 09:47:51.890305] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.345 [2024-11-19 09:47:51.890426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.345 [2024-11-19 09:47:51.890451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:04.345 [2024-11-19 09:47:51.896112] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.345 [2024-11-19 09:47:51.896206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.345 [2024-11-19 09:47:51.896230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:04.345 [2024-11-19 09:47:51.901586] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.345 [2024-11-19 09:47:51.901671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.345 [2024-11-19 09:47:51.901694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:04.345 [2024-11-19 09:47:51.906931] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.345 [2024-11-19 09:47:51.907026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.345 [2024-11-19 09:47:51.907048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:04.345 [2024-11-19 09:47:51.912283] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.345 [2024-11-19 09:47:51.912368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.345 [2024-11-19 09:47:51.912391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:04.345 [2024-11-19 09:47:51.917713] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.345 [2024-11-19 09:47:51.917810] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.345 [2024-11-19 09:47:51.917832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:04.345 [2024-11-19 09:47:51.922885] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.345 [2024-11-19 09:47:51.922990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.345 [2024-11-19 09:47:51.923012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:04.345 [2024-11-19 09:47:51.928201] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.345 [2024-11-19 09:47:51.928297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.345 [2024-11-19 09:47:51.928319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:04.345 [2024-11-19 09:47:51.933426] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.345 [2024-11-19 09:47:51.933559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.345 [2024-11-19 09:47:51.933581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:04.345 [2024-11-19 09:47:51.938540] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.345 [2024-11-19 09:47:51.938660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.345 [2024-11-19 09:47:51.938682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:04.345 [2024-11-19 09:47:51.943846] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.345 [2024-11-19 09:47:51.943953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.345 [2024-11-19 09:47:51.943976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:04.345 [2024-11-19 09:47:51.949093] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.345 [2024-11-19 09:47:51.949191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.345 [2024-11-19 09:47:51.949214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:04.345 [2024-11-19 09:47:51.954560] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.345 [2024-11-19 09:47:51.954643] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.345 [2024-11-19 09:47:51.954666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:04.345 [2024-11-19 09:47:51.959733] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.345 [2024-11-19 09:47:51.959826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.345 [2024-11-19 09:47:51.959848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:04.606 [2024-11-19 09:47:51.964994] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.606 [2024-11-19 09:47:51.965063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.606 [2024-11-19 09:47:51.965086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:04.606 [2024-11-19 09:47:51.970255] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.606 [2024-11-19 09:47:51.970349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.606 [2024-11-19 09:47:51.970372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:04.606 [2024-11-19 09:47:51.975427] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.606 [2024-11-19 09:47:51.975509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.606 [2024-11-19 09:47:51.975532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:04.606 [2024-11-19 09:47:51.980793] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.606 [2024-11-19 09:47:51.980886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.606 [2024-11-19 09:47:51.980908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:04.606 [2024-11-19 09:47:51.986055] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.606 [2024-11-19 09:47:51.986136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.606 [2024-11-19 09:47:51.986158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:04.606 [2024-11-19 09:47:51.991368] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.606 [2024-11-19 
09:47:51.991468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.606 [2024-11-19 09:47:51.991491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:04.606 [2024-11-19 09:47:51.996541] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.606 [2024-11-19 09:47:51.996622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.606 [2024-11-19 09:47:51.996645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:04.606 [2024-11-19 09:47:52.001723] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.606 [2024-11-19 09:47:52.001821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.606 [2024-11-19 09:47:52.001844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:04.606 [2024-11-19 09:47:52.006875] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.606 [2024-11-19 09:47:52.006956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.606 [2024-11-19 09:47:52.006979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:04.606 [2024-11-19 09:47:52.012123] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.606 [2024-11-19 09:47:52.012228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.606 [2024-11-19 09:47:52.012265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:04.606 [2024-11-19 09:47:52.017497] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.606 [2024-11-19 09:47:52.017578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.606 [2024-11-19 09:47:52.017611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:04.606 [2024-11-19 09:47:52.022783] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.606 [2024-11-19 09:47:52.022867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.606 [2024-11-19 09:47:52.022889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:04.606 [2024-11-19 09:47:52.027974] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with 
pdu=0x2000166ff3c8 00:20:04.606 [2024-11-19 09:47:52.028080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.606 [2024-11-19 09:47:52.028108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:04.606 [2024-11-19 09:47:52.033168] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.606 [2024-11-19 09:47:52.033275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.606 [2024-11-19 09:47:52.033298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:04.606 [2024-11-19 09:47:52.038347] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.606 [2024-11-19 09:47:52.038428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.607 [2024-11-19 09:47:52.038459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:04.607 [2024-11-19 09:47:52.043586] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.607 [2024-11-19 09:47:52.043679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.607 [2024-11-19 09:47:52.043702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:04.607 [2024-11-19 09:47:52.048724] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.607 [2024-11-19 09:47:52.048805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.607 [2024-11-19 09:47:52.048828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:04.607 [2024-11-19 09:47:52.054367] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.607 [2024-11-19 09:47:52.054455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.607 [2024-11-19 09:47:52.054478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:04.607 [2024-11-19 09:47:52.059678] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.607 [2024-11-19 09:47:52.059758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.607 [2024-11-19 09:47:52.059780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:04.607 [2024-11-19 09:47:52.064896] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.607 [2024-11-19 09:47:52.065006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.607 [2024-11-19 09:47:52.065029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:04.607 [2024-11-19 09:47:52.070015] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.607 [2024-11-19 09:47:52.070112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.607 [2024-11-19 09:47:52.070135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:04.607 [2024-11-19 09:47:52.075186] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.607 [2024-11-19 09:47:52.075326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.607 [2024-11-19 09:47:52.075349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:04.607 [2024-11-19 09:47:52.080420] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.607 [2024-11-19 09:47:52.080521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.607 [2024-11-19 09:47:52.080544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:04.607 [2024-11-19 09:47:52.085560] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.607 [2024-11-19 09:47:52.085658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.607 [2024-11-19 09:47:52.085681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:04.607 [2024-11-19 09:47:52.090741] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.607 [2024-11-19 09:47:52.090837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.607 [2024-11-19 09:47:52.090859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:04.607 [2024-11-19 09:47:52.096074] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.607 [2024-11-19 09:47:52.096178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.607 [2024-11-19 09:47:52.096201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:04.607 [2024-11-19 09:47:52.101341] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.607 [2024-11-19 09:47:52.101451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.607 [2024-11-19 09:47:52.101473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:04.607 [2024-11-19 09:47:52.106492] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.607 [2024-11-19 09:47:52.106564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.607 [2024-11-19 09:47:52.106590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:04.607 [2024-11-19 09:47:52.111777] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.607 [2024-11-19 09:47:52.111884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.607 [2024-11-19 09:47:52.111907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:04.607 [2024-11-19 09:47:52.117003] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.607 [2024-11-19 09:47:52.117101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.607 [2024-11-19 09:47:52.117123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:04.607 [2024-11-19 09:47:52.122331] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.607 [2024-11-19 09:47:52.122412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.607 [2024-11-19 09:47:52.122434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:04.607 [2024-11-19 09:47:52.127773] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.607 [2024-11-19 09:47:52.127868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.607 [2024-11-19 09:47:52.127891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:04.607 [2024-11-19 09:47:52.132981] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.607 [2024-11-19 09:47:52.133095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.607 [2024-11-19 09:47:52.133117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:04.607 [2024-11-19 09:47:52.138203] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.607 [2024-11-19 09:47:52.138333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.607 [2024-11-19 09:47:52.138355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:04.607 [2024-11-19 09:47:52.143446] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.607 [2024-11-19 09:47:52.143531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.607 [2024-11-19 09:47:52.143553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:04.607 [2024-11-19 09:47:52.148648] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.607 [2024-11-19 09:47:52.148751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.607 [2024-11-19 09:47:52.148774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:04.607 [2024-11-19 09:47:52.153871] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.607 [2024-11-19 09:47:52.153966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.607 [2024-11-19 09:47:52.153988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:04.607 [2024-11-19 09:47:52.159161] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.607 [2024-11-19 09:47:52.159308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.607 [2024-11-19 09:47:52.159331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:04.607 [2024-11-19 09:47:52.164424] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.607 [2024-11-19 09:47:52.164539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.607 [2024-11-19 09:47:52.164562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:04.607 [2024-11-19 09:47:52.169614] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.607 [2024-11-19 09:47:52.169711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.607 [2024-11-19 09:47:52.169734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:04.607 
[2024-11-19 09:47:52.174724] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.607 [2024-11-19 09:47:52.174819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.607 [2024-11-19 09:47:52.174841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:04.607 [2024-11-19 09:47:52.179904] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.607 [2024-11-19 09:47:52.180014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.608 [2024-11-19 09:47:52.180036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:04.608 [2024-11-19 09:47:52.185104] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.608 [2024-11-19 09:47:52.185207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.608 [2024-11-19 09:47:52.185229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:04.608 [2024-11-19 09:47:52.190166] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.608 [2024-11-19 09:47:52.190283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.608 [2024-11-19 09:47:52.190306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:04.608 [2024-11-19 09:47:52.195187] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.608 [2024-11-19 09:47:52.195313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.608 [2024-11-19 09:47:52.195336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:04.608 [2024-11-19 09:47:52.200346] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.608 [2024-11-19 09:47:52.200442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.608 [2024-11-19 09:47:52.200480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:04.608 [2024-11-19 09:47:52.205469] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.608 [2024-11-19 09:47:52.205551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.608 [2024-11-19 09:47:52.205573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 
p:0 m:0 dnr:0 00:20:04.608 [2024-11-19 09:47:52.210616] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.608 [2024-11-19 09:47:52.210720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.608 [2024-11-19 09:47:52.210743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:04.608 [2024-11-19 09:47:52.215783] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.608 [2024-11-19 09:47:52.215880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.608 [2024-11-19 09:47:52.215902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:04.608 [2024-11-19 09:47:52.220971] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.608 [2024-11-19 09:47:52.221075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.608 [2024-11-19 09:47:52.221097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:04.608 [2024-11-19 09:47:52.226199] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.608 [2024-11-19 09:47:52.226317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.608 [2024-11-19 09:47:52.226339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:04.867 [2024-11-19 09:47:52.231339] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.867 [2024-11-19 09:47:52.231424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.867 [2024-11-19 09:47:52.231447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:04.867 [2024-11-19 09:47:52.236549] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.867 [2024-11-19 09:47:52.236646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.867 [2024-11-19 09:47:52.236668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:04.867 [2024-11-19 09:47:52.241651] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.867 [2024-11-19 09:47:52.241746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.867 [2024-11-19 09:47:52.241768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:04.867 [2024-11-19 09:47:52.246716] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.867 [2024-11-19 09:47:52.246809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.867 [2024-11-19 09:47:52.246831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:04.867 [2024-11-19 09:47:52.252662] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.867 [2024-11-19 09:47:52.252762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.867 [2024-11-19 09:47:52.252790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:04.867 [2024-11-19 09:47:52.258724] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.867 [2024-11-19 09:47:52.258869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.867 [2024-11-19 09:47:52.258898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:04.867 [2024-11-19 09:47:52.264408] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.867 [2024-11-19 09:47:52.264517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.867 [2024-11-19 09:47:52.264542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:04.867 [2024-11-19 09:47:52.269590] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.867 [2024-11-19 09:47:52.269684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.867 [2024-11-19 09:47:52.269707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:04.867 [2024-11-19 09:47:52.274729] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.867 [2024-11-19 09:47:52.274824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.867 [2024-11-19 09:47:52.274846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:04.867 [2024-11-19 09:47:52.279924] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.867 [2024-11-19 09:47:52.280029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.867 [2024-11-19 09:47:52.280051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:04.867 [2024-11-19 09:47:52.285139] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.868 [2024-11-19 09:47:52.285246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.868 [2024-11-19 09:47:52.285283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:04.868 [2024-11-19 09:47:52.290286] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.868 [2024-11-19 09:47:52.290401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.868 [2024-11-19 09:47:52.290423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:04.868 [2024-11-19 09:47:52.295347] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.868 [2024-11-19 09:47:52.295436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.868 [2024-11-19 09:47:52.295459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:04.868 [2024-11-19 09:47:52.300554] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.868 [2024-11-19 09:47:52.300642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.868 [2024-11-19 09:47:52.300665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:04.868 [2024-11-19 09:47:52.305832] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.868 [2024-11-19 09:47:52.305934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.868 [2024-11-19 09:47:52.305957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:04.868 [2024-11-19 09:47:52.311081] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.868 [2024-11-19 09:47:52.311163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.868 [2024-11-19 09:47:52.311185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:04.868 [2024-11-19 09:47:52.316176] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.868 [2024-11-19 09:47:52.316283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.868 [2024-11-19 09:47:52.316306] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:04.868 [2024-11-19 09:47:52.321320] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.868 [2024-11-19 09:47:52.321428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.868 [2024-11-19 09:47:52.321450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:04.868 [2024-11-19 09:47:52.326565] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.868 [2024-11-19 09:47:52.326656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.868 [2024-11-19 09:47:52.326679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:04.868 [2024-11-19 09:47:52.332176] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.868 [2024-11-19 09:47:52.332271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.868 [2024-11-19 09:47:52.332294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:04.868 [2024-11-19 09:47:52.337822] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.868 [2024-11-19 09:47:52.337942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.868 [2024-11-19 09:47:52.337973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:04.868 [2024-11-19 09:47:52.343004] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.868 [2024-11-19 09:47:52.343112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.868 [2024-11-19 09:47:52.343136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:04.868 [2024-11-19 09:47:52.348207] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.868 [2024-11-19 09:47:52.348335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.868 [2024-11-19 09:47:52.348358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:04.868 [2024-11-19 09:47:52.353336] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.868 [2024-11-19 09:47:52.353445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.868 [2024-11-19 
09:47:52.353468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:04.868 [2024-11-19 09:47:52.358443] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.868 [2024-11-19 09:47:52.358548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.868 [2024-11-19 09:47:52.358571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:04.868 [2024-11-19 09:47:52.363621] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.868 [2024-11-19 09:47:52.363718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.868 [2024-11-19 09:47:52.363741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:04.868 [2024-11-19 09:47:52.368800] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.868 [2024-11-19 09:47:52.368930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.868 [2024-11-19 09:47:52.368962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:04.868 [2024-11-19 09:47:52.373910] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.868 [2024-11-19 09:47:52.374026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.868 [2024-11-19 09:47:52.374048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:04.868 [2024-11-19 09:47:52.381221] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.868 [2024-11-19 09:47:52.381387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.868 [2024-11-19 09:47:52.381409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:04.868 [2024-11-19 09:47:52.388413] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.868 [2024-11-19 09:47:52.388554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.868 [2024-11-19 09:47:52.388577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:04.868 [2024-11-19 09:47:52.395406] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.868 [2024-11-19 09:47:52.395536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:04.868 [2024-11-19 09:47:52.395559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:04.868 [2024-11-19 09:47:52.401785] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.868 [2024-11-19 09:47:52.401864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.868 [2024-11-19 09:47:52.401888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:04.868 [2024-11-19 09:47:52.407503] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.868 [2024-11-19 09:47:52.407581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.868 [2024-11-19 09:47:52.407606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:04.868 [2024-11-19 09:47:52.412897] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.868 [2024-11-19 09:47:52.412996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.868 [2024-11-19 09:47:52.413019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:04.868 [2024-11-19 09:47:52.418207] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.868 [2024-11-19 09:47:52.418326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.868 [2024-11-19 09:47:52.418349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:04.868 [2024-11-19 09:47:52.423505] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.868 [2024-11-19 09:47:52.423582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.868 [2024-11-19 09:47:52.423605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:04.868 [2024-11-19 09:47:52.428732] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.868 [2024-11-19 09:47:52.428813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.868 [2024-11-19 09:47:52.428836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:04.868 [2024-11-19 09:47:52.434032] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.869 [2024-11-19 09:47:52.434126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21088 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.869 [2024-11-19 09:47:52.434149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:04.869 [2024-11-19 09:47:52.439140] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.869 [2024-11-19 09:47:52.439270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.869 [2024-11-19 09:47:52.439293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:04.869 [2024-11-19 09:47:52.444301] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.869 [2024-11-19 09:47:52.444406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.869 [2024-11-19 09:47:52.444428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:04.869 [2024-11-19 09:47:52.449450] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.869 [2024-11-19 09:47:52.449565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.869 [2024-11-19 09:47:52.449588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:04.869 [2024-11-19 09:47:52.454556] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.869 [2024-11-19 09:47:52.454671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.869 [2024-11-19 09:47:52.454693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:04.869 [2024-11-19 09:47:52.459662] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.869 [2024-11-19 09:47:52.459777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.869 [2024-11-19 09:47:52.459799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:04.869 [2024-11-19 09:47:52.464926] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.869 [2024-11-19 09:47:52.465030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.869 [2024-11-19 09:47:52.465053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:04.869 5789.50 IOPS, 723.69 MiB/s [2024-11-19T09:47:52.492Z] [2024-11-19 09:47:52.470783] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1384750) with pdu=0x2000166ff3c8 00:20:04.869 [2024-11-19 09:47:52.470878] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.869 [2024-11-19 09:47:52.470901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:04.869 00:20:04.869 Latency(us) 00:20:04.869 [2024-11-19T09:47:52.492Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:04.869 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:20:04.869 nvme0n1 : 2.00 5788.36 723.54 0.00 0.00 2757.95 1668.19 11260.28 00:20:04.869 [2024-11-19T09:47:52.492Z] =================================================================================================================== 00:20:04.869 [2024-11-19T09:47:52.492Z] Total : 5788.36 723.54 0.00 0.00 2757.95 1668.19 11260.28 00:20:04.869 { 00:20:04.869 "results": [ 00:20:04.869 { 00:20:04.869 "job": "nvme0n1", 00:20:04.869 "core_mask": "0x2", 00:20:04.869 "workload": "randwrite", 00:20:04.869 "status": "finished", 00:20:04.869 "queue_depth": 16, 00:20:04.869 "io_size": 131072, 00:20:04.869 "runtime": 2.004195, 00:20:04.869 "iops": 5788.358917171234, 00:20:04.869 "mibps": 723.5448646464042, 00:20:04.869 "io_failed": 0, 00:20:04.869 "io_timeout": 0, 00:20:04.869 "avg_latency_us": 2757.9476552961737, 00:20:04.869 "min_latency_us": 1668.189090909091, 00:20:04.869 "max_latency_us": 11260.276363636363 00:20:04.869 } 00:20:04.869 ], 00:20:04.869 "core_count": 1 00:20:04.869 } 00:20:05.128 09:47:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:20:05.128 09:47:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:20:05.128 09:47:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:20:05.128 | .driver_specific 00:20:05.128 | .nvme_error 00:20:05.128 | .status_code 00:20:05.128 | .command_transient_transport_error' 00:20:05.128 09:47:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:20:05.386 09:47:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 375 > 0 )) 00:20:05.386 09:47:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80586 00:20:05.386 09:47:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 80586 ']' 00:20:05.386 09:47:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 80586 00:20:05.386 09:47:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:20:05.386 09:47:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:05.386 09:47:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80586 00:20:05.386 09:47:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:05.386 09:47:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:05.386 killing process with pid 80586 00:20:05.386 09:47:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
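The trace above shows how host/digest.sh decides whether the injected data-digest corruption was actually observed by the host: it asks the bdevperf RPC server for the iostat of nvme0n1 and pulls the transient-transport-error counter out of the JSON with jq, and the assertion that follows requires that counter to be positive (375 errors were counted in this run). As a quick sanity check on the summary table above, the throughput column is simply IOPS times IO size: 5788.36 IOPS x 128 KiB is about 723.5 MiB/s, matching the reported 723.54 MiB/s. A minimal sketch of the helper, reconstructed from the traced commands rather than copied from the script:

    # Sketch of get_transient_errcount as it appears in the xtrace output above
    # (host/digest.sh); an approximation, not the verbatim source.
    get_transient_errcount() {
        local bdev=$1
        # bperf_rpc forwards to rpc.py against the bdevperf RPC socket
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
            bdev_get_iostat -b "$bdev" |
            jq -r '.bdevs[0]
                | .driver_specific
                | .nvme_error
                | .status_code
                | .command_transient_transport_error'
    }

    # The test then requires that at least one transient transport error was seen:
    (( $(get_transient_errcount nvme0n1) > 0 ))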
80586' 00:20:05.386 Received shutdown signal, test time was about 2.000000 seconds 00:20:05.386 00:20:05.386 Latency(us) 00:20:05.386 [2024-11-19T09:47:53.009Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:05.386 [2024-11-19T09:47:53.009Z] =================================================================================================================== 00:20:05.386 [2024-11-19T09:47:53.009Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:05.386 09:47:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 80586 00:20:05.386 09:47:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 80586 00:20:05.387 09:47:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 80386 00:20:05.387 09:47:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 80386 ']' 00:20:05.387 09:47:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 80386 00:20:05.647 09:47:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:20:05.647 09:47:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:05.647 09:47:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80386 00:20:05.647 09:47:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:05.647 09:47:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:05.647 killing process with pid 80386 00:20:05.647 09:47:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80386' 00:20:05.647 09:47:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 80386 00:20:05.647 09:47:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 80386 00:20:05.647 00:20:05.647 real 0m17.922s 00:20:05.647 user 0m35.733s 00:20:05.647 sys 0m4.812s 00:20:05.647 09:47:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:05.647 09:47:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:05.647 ************************************ 00:20:05.647 END TEST nvmf_digest_error 00:20:05.647 ************************************ 00:20:05.908 09:47:53 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:20:05.908 09:47:53 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:20:05.908 09:47:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:05.908 09:47:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:20:05.908 09:47:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:05.908 09:47:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:20:05.908 09:47:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:05.908 09:47:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:05.908 rmmod nvme_tcp 00:20:05.908 rmmod nvme_fabrics 00:20:05.908 rmmod nvme_keyring 00:20:05.908 09:47:53 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:05.908 09:47:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:20:05.908 09:47:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:20:05.908 09:47:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 80386 ']' 00:20:05.908 09:47:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 80386 00:20:05.908 09:47:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 80386 ']' 00:20:05.908 09:47:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 80386 00:20:05.908 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (80386) - No such process 00:20:05.908 Process with pid 80386 is not found 00:20:05.908 09:47:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 80386 is not found' 00:20:05.908 09:47:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:05.908 09:47:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:05.908 09:47:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:05.908 09:47:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:20:05.908 09:47:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:20:05.908 09:47:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:05.908 09:47:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:20:05.908 09:47:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:05.908 09:47:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:20:05.908 09:47:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:20:05.908 09:47:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:20:05.908 09:47:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:20:05.908 09:47:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:20:05.909 09:47:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:20:05.909 09:47:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:20:05.909 09:47:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:20:05.909 09:47:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:20:05.909 09:47:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:20:06.167 09:47:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:20:06.167 09:47:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:20:06.167 09:47:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:06.167 09:47:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:06.167 09:47:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@246 -- # remove_spdk_ns 00:20:06.167 09:47:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 
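The teardown just traced (nvmftestfini) unloads the NVMe/TCP kernel modules (the -v output above shows rmmod nvme_tcp, nvme_fabrics and nvme_keyring), restores every iptables rule except the SPDK_NVMF-tagged ones added by the test, and dismantles the veth/bridge topology before the target network namespace goes away. Condensed into plain commands, the sequence looks roughly like this (an approximation assembled from the traced ip and iptables calls, not the verbatim nvmf/common.sh):

    # Approximate replay of the traced cleanup; the real helpers are
    # nvmfcleanup, iptr and nvmf_veth_fini in nvmf/common.sh.
    modprobe -v -r nvme-tcp         # -v prints the rmmod lines seen above
    modprobe -v -r nvme-fabrics

    # Keep every iptables rule that was not installed by the test:
    iptables-save | grep -v SPDK_NVMF | iptables-restore

    # Detach the bridge ports, bring them down, then delete the bridge,
    # the initiator-side veth ends and the target-side ends in the namespace.
    for port in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$port" nomaster
    done
    for port in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$port" down
    done
    ip link delete nvmf_br type bridge
    ip link delete nvmf_init_if
    ip link delete nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
    ip netns delete nvmf_tgt_ns_spdk   # handled by remove_spdk_ns in the script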
00:20:06.167 09:47:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:06.167 09:47:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:06.167 09:47:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@300 -- # return 0 00:20:06.167 00:20:06.167 real 0m35.612s 00:20:06.167 user 1m8.410s 00:20:06.167 sys 0m10.177s 00:20:06.167 09:47:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:06.167 09:47:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:20:06.167 ************************************ 00:20:06.167 END TEST nvmf_digest 00:20:06.167 ************************************ 00:20:06.167 09:47:53 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:20:06.167 09:47:53 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 1 -eq 1 ]] 00:20:06.167 09:47:53 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@42 -- # run_test nvmf_host_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:20:06.167 09:47:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:06.167 09:47:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:06.167 09:47:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:06.167 ************************************ 00:20:06.167 START TEST nvmf_host_multipath 00:20:06.167 ************************************ 00:20:06.167 09:47:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:20:06.427 * Looking for test storage... 00:20:06.427 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:06.427 09:47:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:06.427 09:47:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:20:06.427 09:47:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:06.427 09:47:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:06.427 09:47:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:06.427 09:47:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:06.427 09:47:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:06.427 09:47:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:20:06.427 09:47:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:20:06.427 09:47:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:20:06.427 09:47:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:20:06.427 09:47:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:20:06.427 09:47:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:20:06.427 09:47:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:20:06.427 09:47:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:06.427 09:47:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@344 -- # case "$op" in 00:20:06.427 09:47:53 
nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@345 -- # : 1 00:20:06.427 09:47:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:06.427 09:47:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:06.427 09:47:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # decimal 1 00:20:06.427 09:47:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=1 00:20:06.427 09:47:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:06.427 09:47:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 1 00:20:06.427 09:47:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:20:06.427 09:47:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # decimal 2 00:20:06.427 09:47:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=2 00:20:06.427 09:47:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:06.427 09:47:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 2 00:20:06.427 09:47:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:20:06.427 09:47:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:06.427 09:47:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:06.427 09:47:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # return 0 00:20:06.427 09:47:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:06.427 09:47:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:06.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:06.427 --rc genhtml_branch_coverage=1 00:20:06.427 --rc genhtml_function_coverage=1 00:20:06.427 --rc genhtml_legend=1 00:20:06.427 --rc geninfo_all_blocks=1 00:20:06.427 --rc geninfo_unexecuted_blocks=1 00:20:06.427 00:20:06.427 ' 00:20:06.427 09:47:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:06.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:06.427 --rc genhtml_branch_coverage=1 00:20:06.427 --rc genhtml_function_coverage=1 00:20:06.427 --rc genhtml_legend=1 00:20:06.427 --rc geninfo_all_blocks=1 00:20:06.427 --rc geninfo_unexecuted_blocks=1 00:20:06.427 00:20:06.427 ' 00:20:06.427 09:47:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:06.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:06.428 --rc genhtml_branch_coverage=1 00:20:06.428 --rc genhtml_function_coverage=1 00:20:06.428 --rc genhtml_legend=1 00:20:06.428 --rc geninfo_all_blocks=1 00:20:06.428 --rc geninfo_unexecuted_blocks=1 00:20:06.428 00:20:06.428 ' 00:20:06.428 09:47:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:06.428 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:06.428 --rc genhtml_branch_coverage=1 00:20:06.428 --rc genhtml_function_coverage=1 00:20:06.428 --rc genhtml_legend=1 00:20:06.428 --rc geninfo_all_blocks=1 00:20:06.428 --rc geninfo_unexecuted_blocks=1 00:20:06.428 00:20:06.428 ' 00:20:06.428 
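The run of scripts/common.sh calls traced above is a version gate: before wiring up coverage options, autotest_common.sh asks whether the installed lcov (taken from lcov --version via awk) is older than 2, by splitting both version strings on '.', '-' and ':' and comparing them field by field. A compact sketch of that comparison in the same spirit (simplified; the real cmp_versions also routes each field through its decimal helper and supports the other operators):

    # Field-by-field version compare in the spirit of scripts/common.sh, which
    # the trace above exercises as "lt 1.15 2" (true here, after which the
    # lcov 1.x style --rc branch/function coverage flags are set).
    cmp_versions() {
        local IFS=.-:
        local -a ver1=($1) ver2=($3)
        local op=$2 v len
        len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            local a=${ver1[v]:-0} b=${ver2[v]:-0}
            if (( a > b )); then [[ $op == ">" ]]; return; fi
            if (( a < b )); then [[ $op == "<" ]]; return; fi
        done
        [[ $op == "==" ]]   # all fields equal
    }
    lt() { cmp_versions "$1" '<' "$2"; }

    lt 1.15 2 && echo "lcov is older than 2.x"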
09:47:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:06.428 09:47:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@7 -- # uname -s 00:20:06.428 09:47:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:06.428 09:47:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:06.428 09:47:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:06.428 09:47:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:06.428 09:47:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:06.428 09:47:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:06.428 09:47:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:06.428 09:47:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:06.428 09:47:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:06.428 09:47:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:06.428 09:47:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c 00:20:06.428 09:47:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=9203ba0c-8506-4f0b-a886-a7f874c4694c 00:20:06.428 09:47:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:06.428 09:47:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:06.428 09:47:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:06.428 09:47:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:06.428 09:47:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:06.428 09:47:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:20:06.428 09:47:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:06.428 09:47:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:06.428 09:47:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:06.428 09:47:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:06.428 09:47:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:06.428 09:47:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:06.428 09:47:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@5 -- # export PATH 00:20:06.428 09:47:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:06.428 09:47:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@51 -- # : 0 00:20:06.428 09:47:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:06.428 09:47:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:06.428 09:47:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:06.428 09:47:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:06.428 09:47:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:06.428 09:47:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:06.428 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:06.428 09:47:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:06.428 09:47:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:06.428 09:47:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:06.428 09:47:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:06.428 09:47:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:06.428 09:47:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@14 
-- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:06.428 09:47:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:20:06.428 09:47:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:06.428 09:47:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:20:06.428 09:47:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@30 -- # nvmftestinit 00:20:06.428 09:47:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:06.428 09:47:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:06.428 09:47:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:06.428 09:47:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:06.428 09:47:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:06.428 09:47:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:06.428 09:47:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:06.428 09:47:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:06.428 09:47:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:20:06.428 09:47:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:20:06.428 09:47:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:20:06.428 09:47:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:20:06.428 09:47:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:20:06.428 09:47:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@460 -- # nvmf_veth_init 00:20:06.428 09:47:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:06.428 09:47:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:20:06.428 09:47:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:20:06.428 09:47:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:20:06.429 09:47:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:06.429 09:47:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:20:06.429 09:47:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:06.429 09:47:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:20:06.429 09:47:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:06.429 09:47:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:20:06.429 09:47:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:06.429 09:47:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@156 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:06.429 09:47:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:06.429 09:47:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:06.429 09:47:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:06.429 09:47:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:06.429 09:47:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:20:06.429 Cannot find device "nvmf_init_br" 00:20:06.429 09:47:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # true 00:20:06.429 09:47:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:20:06.429 Cannot find device "nvmf_init_br2" 00:20:06.429 09:47:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # true 00:20:06.429 09:47:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:20:06.429 Cannot find device "nvmf_tgt_br" 00:20:06.429 09:47:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # true 00:20:06.429 09:47:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:20:06.429 Cannot find device "nvmf_tgt_br2" 00:20:06.429 09:47:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # true 00:20:06.429 09:47:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:20:06.429 Cannot find device "nvmf_init_br" 00:20:06.429 09:47:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # true 00:20:06.429 09:47:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:20:06.429 Cannot find device "nvmf_init_br2" 00:20:06.429 09:47:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # true 00:20:06.429 09:47:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:20:06.429 Cannot find device "nvmf_tgt_br" 00:20:06.429 09:47:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # true 00:20:06.429 09:47:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:20:06.429 Cannot find device "nvmf_tgt_br2" 00:20:06.429 09:47:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # true 00:20:06.429 09:47:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:20:06.687 Cannot find device "nvmf_br" 00:20:06.687 09:47:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # true 00:20:06.687 09:47:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:20:06.687 Cannot find device "nvmf_init_if" 00:20:06.687 09:47:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # true 00:20:06.687 09:47:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:20:06.687 Cannot find device "nvmf_init_if2" 00:20:06.687 09:47:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # true 00:20:06.687 09:47:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 
00:20:06.687 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:06.687 09:47:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # true 00:20:06.687 09:47:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:06.687 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:06.687 09:47:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # true 00:20:06.687 09:47:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:20:06.687 09:47:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:06.687 09:47:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:20:06.687 09:47:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:06.687 09:47:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:06.687 09:47:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:06.687 09:47:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:06.687 09:47:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:06.687 09:47:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:20:06.687 09:47:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:20:06.687 09:47:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:20:06.687 09:47:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:20:06.687 09:47:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:20:06.687 09:47:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:20:06.687 09:47:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:20:06.687 09:47:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:20:06.687 09:47:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:20:06.687 09:47:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:06.687 09:47:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:06.687 09:47:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:06.687 09:47:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:20:06.687 09:47:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:20:06.687 09:47:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 
00:20:06.687 09:47:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:20:06.687 09:47:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:06.687 09:47:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:06.946 09:47:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:06.946 09:47:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:20:06.946 09:47:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:20:06.946 09:47:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:20:06.946 09:47:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:06.946 09:47:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:20:06.946 09:47:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:20:06.946 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:06.946 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.086 ms 00:20:06.946 00:20:06.946 --- 10.0.0.3 ping statistics --- 00:20:06.946 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:06.946 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:20:06.946 09:47:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:20:06.946 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:20:06.946 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.046 ms 00:20:06.946 00:20:06.946 --- 10.0.0.4 ping statistics --- 00:20:06.946 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:06.946 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:20:06.946 09:47:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:06.946 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:06.946 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:20:06.946 00:20:06.946 --- 10.0.0.1 ping statistics --- 00:20:06.946 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:06.946 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:20:06.946 09:47:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:20:06.946 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:06.946 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:20:06.946 00:20:06.946 --- 10.0.0.2 ping statistics --- 00:20:06.946 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:06.946 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:20:06.946 09:47:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:06.946 09:47:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@461 -- # return 0 00:20:06.946 09:47:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:06.946 09:47:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:06.946 09:47:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:06.946 09:47:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:06.946 09:47:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:06.946 09:47:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:06.946 09:47:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:06.946 09:47:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:20:06.946 09:47:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:06.946 09:47:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:06.946 09:47:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:20:06.946 09:47:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@509 -- # nvmfpid=80906 00:20:06.946 09:47:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:20:06.946 09:47:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@510 -- # waitforlisten 80906 00:20:06.946 09:47:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # '[' -z 80906 ']' 00:20:06.946 09:47:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:06.946 09:47:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:06.946 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:06.946 09:47:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:06.946 09:47:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:06.946 09:47:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:20:06.947 [2024-11-19 09:47:54.436967] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 
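Note on the preceding nvmf_veth_init trace: it builds the virtual test network the rest of the run depends on: two initiator veth pairs in the root namespace, two target veth pairs moved into the nvmf_tgt_ns_spdk namespace, all bridge-side ends enslaved to a single bridge, TCP port 4420 opened in iptables, and connectivity verified with one ping per address before nvmf_tgt is launched inside the namespace. Condensed into plain commands it amounts to roughly the following sketch (interface names and addresses are taken from the trace above; ordering is simplified, and error handling, the "Cannot find device" cleanup attempts and the iptables comments are omitted):

  # namespace that will host the SPDK target
  ip netns add nvmf_tgt_ns_spdk

  # veth pairs: *_if is the addressed end, *_br is the end that joins the bridge
  ip link add nvmf_init_if  type veth peer name nvmf_init_br
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
  ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2

  # target-side ends live inside the namespace, initiator ends stay in the root ns
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

  # addressing: initiators get 10.0.0.1/.2, target listeners 10.0.0.3/.4
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip addr add 10.0.0.2/24 dev nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

  # bring everything up and join the bridge-side ends to one bridge
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
          ip link set "$dev" up
  done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
          ip link set "$dev" master nvmf_br
  done

  # allow NVMe/TCP traffic to the initiator interfaces and bridge-local forwarding
  iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
  iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

  # sanity check: each side pings the other before the target is started in the namespace
  ping -c 1 10.0.0.3 && ping -c 1 10.0.0.4
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2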
00:20:06.947 [2024-11-19 09:47:54.437071] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:07.205 [2024-11-19 09:47:54.590411] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:07.205 [2024-11-19 09:47:54.661821] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:07.205 [2024-11-19 09:47:54.661894] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:07.205 [2024-11-19 09:47:54.661917] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:07.205 [2024-11-19 09:47:54.661934] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:07.205 [2024-11-19 09:47:54.661943] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:07.205 [2024-11-19 09:47:54.663341] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:07.205 [2024-11-19 09:47:54.663355] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:07.205 [2024-11-19 09:47:54.721967] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:07.205 09:47:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:07.205 09:47:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@868 -- # return 0 00:20:07.205 09:47:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:07.205 09:47:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:07.205 09:47:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:20:07.477 09:47:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:07.477 09:47:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@33 -- # nvmfapp_pid=80906 00:20:07.477 09:47:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:20:07.744 [2024-11-19 09:47:55.113435] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:07.744 09:47:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:20:08.003 Malloc0 00:20:08.003 09:47:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:20:08.261 09:47:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:08.519 09:47:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:20:09.084 [2024-11-19 09:47:56.403706] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:09.084 09:47:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:20:09.343 [2024-11-19 09:47:56.723902] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:20:09.343 09:47:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@44 -- # bdevperf_pid=80960 00:20:09.343 09:47:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:20:09.343 09:47:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:09.343 09:47:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@47 -- # waitforlisten 80960 /var/tmp/bdevperf.sock 00:20:09.343 09:47:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # '[' -z 80960 ']' 00:20:09.343 09:47:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:09.343 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:09.343 09:47:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:09.343 09:47:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:09.343 09:47:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:09.343 09:47:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:20:09.601 09:47:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:09.601 09:47:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@868 -- # return 0 00:20:09.601 09:47:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:20:09.858 09:47:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:20:10.478 Nvme0n1 00:20:10.478 09:47:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:20:10.737 Nvme0n1 00:20:10.737 09:47:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@78 -- # sleep 1 00:20:10.737 09:47:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:20:11.673 09:47:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:20:11.673 09:47:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:20:11.930 09:47:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:20:12.495 09:47:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:20:12.495 09:47:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81003 00:20:12.495 09:47:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80906 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:20:12.495 09:47:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:20:19.056 09:48:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:20:19.056 09:48:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:20:19.056 09:48:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:20:19.056 09:48:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:19.056 Attaching 4 probes... 00:20:19.056 @path[10.0.0.3, 4421]: 17509 00:20:19.056 @path[10.0.0.3, 4421]: 18209 00:20:19.056 @path[10.0.0.3, 4421]: 17728 00:20:19.056 @path[10.0.0.3, 4421]: 17777 00:20:19.056 @path[10.0.0.3, 4421]: 17546 00:20:19.056 09:48:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:20:19.056 09:48:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:20:19.056 09:48:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:20:19.056 09:48:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:20:19.056 09:48:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:20:19.056 09:48:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:20:19.056 09:48:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81003 00:20:19.056 09:48:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:19.056 09:48:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:20:19.056 09:48:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:20:19.056 09:48:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:20:19.314 09:48:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:20:19.314 09:48:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81116 00:20:19.314 09:48:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80906 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:20:19.314 09:48:06 
nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:20:25.877 09:48:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:20:25.877 09:48:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:20:25.877 09:48:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:20:25.877 09:48:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:25.877 Attaching 4 probes... 00:20:25.877 @path[10.0.0.3, 4420]: 17388 00:20:25.877 @path[10.0.0.3, 4420]: 17609 00:20:25.877 @path[10.0.0.3, 4420]: 18072 00:20:25.877 @path[10.0.0.3, 4420]: 17984 00:20:25.877 @path[10.0.0.3, 4420]: 17688 00:20:25.877 09:48:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:20:25.877 09:48:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:20:25.877 09:48:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:20:25.877 09:48:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:20:25.877 09:48:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:20:25.877 09:48:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:20:25.877 09:48:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81116 00:20:25.877 09:48:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:25.877 09:48:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:20:25.877 09:48:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:20:25.877 09:48:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:20:26.136 09:48:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:20:26.136 09:48:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80906 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:20:26.136 09:48:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81230 00:20:26.136 09:48:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:20:32.713 09:48:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:20:32.713 09:48:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:20:32.713 09:48:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:20:32.713 09:48:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:32.713 Attaching 4 probes... 00:20:32.713 @path[10.0.0.3, 4421]: 11746 00:20:32.713 @path[10.0.0.3, 4421]: 14984 00:20:32.713 @path[10.0.0.3, 4421]: 15024 00:20:32.713 @path[10.0.0.3, 4421]: 15308 00:20:32.713 @path[10.0.0.3, 4421]: 15017 00:20:32.713 09:48:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:20:32.713 09:48:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:20:32.713 09:48:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:20:32.713 09:48:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:20:32.713 09:48:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:20:32.713 09:48:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:20:32.713 09:48:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81230 00:20:32.713 09:48:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:32.713 09:48:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:20:32.713 09:48:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:20:32.972 09:48:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:20:33.230 09:48:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:20:33.230 09:48:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81347 00:20:33.230 09:48:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:20:33.230 09:48:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80906 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:20:39.792 09:48:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:20:39.792 09:48:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:20:39.792 09:48:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port= 00:20:39.792 09:48:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:39.792 Attaching 4 probes... 
00:20:39.792 00:20:39.792 00:20:39.792 00:20:39.792 00:20:39.792 00:20:39.792 09:48:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:20:39.792 09:48:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:20:39.792 09:48:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:20:39.792 09:48:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port= 00:20:39.792 09:48:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:20:39.792 09:48:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:20:39.792 09:48:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81347 00:20:39.792 09:48:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:39.792 09:48:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:20:39.792 09:48:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:20:39.792 09:48:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:20:40.050 09:48:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:20:40.050 09:48:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81465 00:20:40.050 09:48:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80906 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:20:40.050 09:48:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:20:46.688 09:48:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:20:46.688 09:48:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:20:46.688 09:48:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:20:46.688 09:48:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:46.688 Attaching 4 probes... 
00:20:46.688 @path[10.0.0.3, 4421]: 16311 00:20:46.688 @path[10.0.0.3, 4421]: 17173 00:20:46.688 @path[10.0.0.3, 4421]: 17101 00:20:46.688 @path[10.0.0.3, 4421]: 17537 00:20:46.688 @path[10.0.0.3, 4421]: 17637 00:20:46.688 09:48:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:20:46.688 09:48:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:20:46.688 09:48:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:20:46.688 09:48:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:20:46.688 09:48:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:20:46.688 09:48:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:20:46.688 09:48:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81465 00:20:46.688 09:48:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:46.688 09:48:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:20:46.688 09:48:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@101 -- # sleep 1 00:20:47.624 09:48:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:20:47.624 09:48:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81584 00:20:47.624 09:48:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80906 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:20:47.624 09:48:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:20:54.256 09:48:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:20:54.256 09:48:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:20:54.256 09:48:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:20:54.256 09:48:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:54.256 Attaching 4 probes... 
00:20:54.256 @path[10.0.0.3, 4420]: 17800 00:20:54.256 @path[10.0.0.3, 4420]: 18018 00:20:54.256 @path[10.0.0.3, 4420]: 17418 00:20:54.256 @path[10.0.0.3, 4420]: 16381 00:20:54.256 @path[10.0.0.3, 4420]: 18473 00:20:54.256 09:48:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:20:54.256 09:48:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:20:54.256 09:48:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:20:54.256 09:48:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:20:54.256 09:48:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:20:54.256 09:48:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:20:54.256 09:48:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81584 00:20:54.256 09:48:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:54.256 09:48:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:20:54.256 [2024-11-19 09:48:41.769748] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:20:54.256 09:48:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:20:54.515 09:48:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@111 -- # sleep 6 00:21:01.082 09:48:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:21:01.082 09:48:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81764 00:21:01.082 09:48:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80906 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:21:01.082 09:48:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:21:07.655 09:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:21:07.655 09:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:21:07.655 09:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:21:07.655 09:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:07.655 Attaching 4 probes... 
00:21:07.655 @path[10.0.0.3, 4421]: 17268 00:21:07.655 @path[10.0.0.3, 4421]: 17503 00:21:07.655 @path[10.0.0.3, 4421]: 17215 00:21:07.655 @path[10.0.0.3, 4421]: 16576 00:21:07.655 @path[10.0.0.3, 4421]: 16422 00:21:07.655 09:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:21:07.655 09:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:21:07.655 09:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:21:07.655 09:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:21:07.655 09:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:21:07.655 09:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:21:07.655 09:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81764 00:21:07.655 09:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:07.655 09:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@114 -- # killprocess 80960 00:21:07.655 09:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # '[' -z 80960 ']' 00:21:07.655 09:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # kill -0 80960 00:21:07.655 09:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # uname 00:21:07.655 09:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:07.655 09:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80960 00:21:07.655 09:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:07.655 09:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:07.655 killing process with pid 80960 00:21:07.655 09:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80960' 00:21:07.655 09:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@973 -- # kill 80960 00:21:07.655 09:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@978 -- # wait 80960 00:21:07.655 { 00:21:07.655 "results": [ 00:21:07.655 { 00:21:07.655 "job": "Nvme0n1", 00:21:07.655 "core_mask": "0x4", 00:21:07.655 "workload": "verify", 00:21:07.655 "status": "terminated", 00:21:07.655 "verify_range": { 00:21:07.655 "start": 0, 00:21:07.655 "length": 16384 00:21:07.655 }, 00:21:07.655 "queue_depth": 128, 00:21:07.655 "io_size": 4096, 00:21:07.655 "runtime": 56.074519, 00:21:07.655 "iops": 7339.198041092425, 00:21:07.655 "mibps": 28.668742348017286, 00:21:07.655 "io_failed": 0, 00:21:07.655 "io_timeout": 0, 00:21:07.655 "avg_latency_us": 17412.38049478657, 00:21:07.655 "min_latency_us": 1102.1963636363637, 00:21:07.655 "max_latency_us": 7046430.72 00:21:07.655 } 00:21:07.655 ], 00:21:07.655 "core_count": 1 00:21:07.655 } 00:21:07.655 09:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@116 -- # wait 80960 00:21:07.655 09:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:21:07.655 [2024-11-19 09:47:56.793814] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 
24.03.0 initialization... 00:21:07.655 [2024-11-19 09:47:56.793914] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80960 ] 00:21:07.655 [2024-11-19 09:47:56.937948] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:07.655 [2024-11-19 09:47:57.001279] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:07.655 [2024-11-19 09:47:57.058233] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:07.655 Running I/O for 90 seconds... 00:21:07.655 6933.00 IOPS, 27.08 MiB/s [2024-11-19T09:48:55.278Z] 7677.50 IOPS, 29.99 MiB/s [2024-11-19T09:48:55.278Z] 8081.00 IOPS, 31.57 MiB/s [2024-11-19T09:48:55.278Z] 8338.75 IOPS, 32.57 MiB/s [2024-11-19T09:48:55.278Z] 8458.20 IOPS, 33.04 MiB/s [2024-11-19T09:48:55.278Z] 8523.17 IOPS, 33.29 MiB/s [2024-11-19T09:48:55.278Z] 8565.00 IOPS, 33.46 MiB/s [2024-11-19T09:48:55.278Z] 8587.38 IOPS, 33.54 MiB/s [2024-11-19T09:48:55.278Z] [2024-11-19 09:48:06.791929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:54040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.655 [2024-11-19 09:48:06.792002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:21:07.655 [2024-11-19 09:48:06.792062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:54048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.655 [2024-11-19 09:48:06.792085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:07.655 [2024-11-19 09:48:06.792109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:54056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.655 [2024-11-19 09:48:06.792127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:07.655 [2024-11-19 09:48:06.792150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:54064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.655 [2024-11-19 09:48:06.792166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:21:07.655 [2024-11-19 09:48:06.792189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:54072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.656 [2024-11-19 09:48:06.792226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:21:07.656 [2024-11-19 09:48:06.792253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:54080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.656 [2024-11-19 09:48:06.792270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:21:07.656 [2024-11-19 09:48:06.792293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:54088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.656 [2024-11-19 09:48:06.792309] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:21:07.656 [2024-11-19 09:48:06.792331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:54096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.656 [2024-11-19 09:48:06.792348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:21:07.656 [2024-11-19 09:48:06.792375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:54104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.656 [2024-11-19 09:48:06.792393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:21:07.656 [2024-11-19 09:48:06.792415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:54112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.656 [2024-11-19 09:48:06.792460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:21:07.656 [2024-11-19 09:48:06.792486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:54120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.656 [2024-11-19 09:48:06.792502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:21:07.656 [2024-11-19 09:48:06.792524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:54128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.656 [2024-11-19 09:48:06.792540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:21:07.656 [2024-11-19 09:48:06.792562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:54136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.656 [2024-11-19 09:48:06.792578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:21:07.656 [2024-11-19 09:48:06.792599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:54144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.656 [2024-11-19 09:48:06.792615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:21:07.656 [2024-11-19 09:48:06.792637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:54152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.656 [2024-11-19 09:48:06.792653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:21:07.656 [2024-11-19 09:48:06.792675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:54160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.656 [2024-11-19 09:48:06.792691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:21:07.656 [2024-11-19 09:48:06.792730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:54168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:21:07.656 [2024-11-19 09:48:06.792752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:21:07.656 [2024-11-19 09:48:06.792776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:54176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.656 [2024-11-19 09:48:06.792793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:21:07.656 [2024-11-19 09:48:06.792815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:54184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.656 [2024-11-19 09:48:06.792832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:21:07.656 [2024-11-19 09:48:06.792853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:54192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.656 [2024-11-19 09:48:06.792869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:07.656 [2024-11-19 09:48:06.792891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:54200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.656 [2024-11-19 09:48:06.792908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:21:07.656 [2024-11-19 09:48:06.792929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:54208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.656 [2024-11-19 09:48:06.792945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:21:07.656 [2024-11-19 09:48:06.792978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:54216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.656 [2024-11-19 09:48:06.792995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:21:07.656 [2024-11-19 09:48:06.793017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:54224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.656 [2024-11-19 09:48:06.793033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:21:07.656 [2024-11-19 09:48:06.793055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:53656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.656 [2024-11-19 09:48:06.793071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:21:07.656 [2024-11-19 09:48:06.793093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:53664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.656 [2024-11-19 09:48:06.793109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:21:07.656 [2024-11-19 09:48:06.793131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 
lba:53672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.656 [2024-11-19 09:48:06.793147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:07.656 [2024-11-19 09:48:06.793169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:53680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.656 [2024-11-19 09:48:06.793185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:07.656 [2024-11-19 09:48:06.793220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:53688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.656 [2024-11-19 09:48:06.793240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:21:07.656 [2024-11-19 09:48:06.793263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:53696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.656 [2024-11-19 09:48:06.793279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:21:07.656 [2024-11-19 09:48:06.793301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:53704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.656 [2024-11-19 09:48:06.793318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:21:07.656 [2024-11-19 09:48:06.793340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:53712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.656 [2024-11-19 09:48:06.793356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:07.656 [2024-11-19 09:48:06.793378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:53720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.656 [2024-11-19 09:48:06.793395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.656 [2024-11-19 09:48:06.793417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:53728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.656 [2024-11-19 09:48:06.793433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.656 [2024-11-19 09:48:06.793464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:53736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.656 [2024-11-19 09:48:06.793482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:07.656 [2024-11-19 09:48:06.793504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:53744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.656 [2024-11-19 09:48:06.793520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:21:07.656 [2024-11-19 09:48:06.793542] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:53752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.656 [2024-11-19 09:48:06.793558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:21:07.656 [2024-11-19 09:48:06.793580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:53760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.656 [2024-11-19 09:48:06.793596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:21:07.656 [2024-11-19 09:48:06.793618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:53768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.656 [2024-11-19 09:48:06.793634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:21:07.656 [2024-11-19 09:48:06.793656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:53776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.656 [2024-11-19 09:48:06.793672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:21:07.656 [2024-11-19 09:48:06.793716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:54232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.656 [2024-11-19 09:48:06.793737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:21:07.656 [2024-11-19 09:48:06.793759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:54240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.656 [2024-11-19 09:48:06.793776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:21:07.657 [2024-11-19 09:48:06.793798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:54248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.657 [2024-11-19 09:48:06.793815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:21:07.657 [2024-11-19 09:48:06.793838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:54256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.657 [2024-11-19 09:48:06.793853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:21:07.657 [2024-11-19 09:48:06.793876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:54264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.657 [2024-11-19 09:48:06.793892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:21:07.657 [2024-11-19 09:48:06.793913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:54272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.657 [2024-11-19 09:48:06.793929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:000d p:0 m:0 dnr:0 
00:21:07.657 [2024-11-19 09:48:06.793962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:54280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.657 [2024-11-19 09:48:06.793980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:21:07.657 [2024-11-19 09:48:06.794002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:54288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.657 [2024-11-19 09:48:06.794018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:21:07.657 [2024-11-19 09:48:06.794040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:53784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.657 [2024-11-19 09:48:06.794057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:21:07.657 [2024-11-19 09:48:06.794079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:53792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.657 [2024-11-19 09:48:06.794105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:21:07.657 [2024-11-19 09:48:06.794128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:53800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.657 [2024-11-19 09:48:06.794144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:21:07.657 [2024-11-19 09:48:06.794165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:53808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.657 [2024-11-19 09:48:06.794182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:21:07.657 [2024-11-19 09:48:06.794203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:53816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.657 [2024-11-19 09:48:06.794237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:21:07.657 [2024-11-19 09:48:06.794261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:53824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.657 [2024-11-19 09:48:06.794278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:21:07.657 [2024-11-19 09:48:06.794300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:53832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.657 [2024-11-19 09:48:06.794316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:21:07.657 [2024-11-19 09:48:06.794338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:53840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.657 [2024-11-19 09:48:06.794354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:21:07.657 [2024-11-19 09:48:06.794376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:54296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.657 [2024-11-19 09:48:06.794392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:21:07.657 [2024-11-19 09:48:06.794413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:54304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.657 [2024-11-19 09:48:06.794429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:21:07.657 [2024-11-19 09:48:06.794451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:54312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.657 [2024-11-19 09:48:06.794475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:21:07.657 [2024-11-19 09:48:06.794498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:54320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.657 [2024-11-19 09:48:06.794515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:21:07.657 [2024-11-19 09:48:06.794537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:54328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.657 [2024-11-19 09:48:06.794553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:21:07.657 [2024-11-19 09:48:06.794575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:54336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.657 [2024-11-19 09:48:06.794591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:21:07.657 [2024-11-19 09:48:06.794612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:54344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.657 [2024-11-19 09:48:06.794628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:21:07.657 [2024-11-19 09:48:06.794651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:54352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.657 [2024-11-19 09:48:06.794667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:21:07.657 [2024-11-19 09:48:06.794703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:54360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.657 [2024-11-19 09:48:06.794724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:21:07.657 [2024-11-19 09:48:06.794758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:54368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.657 [2024-11-19 09:48:06.794783] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:07.657 [2024-11-19 09:48:06.794811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:54376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.657 [2024-11-19 09:48:06.794828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:07.657 [2024-11-19 09:48:06.794850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:54384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.657 [2024-11-19 09:48:06.794866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:07.657 [2024-11-19 09:48:06.794888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:54392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.657 [2024-11-19 09:48:06.794904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:07.657 [2024-11-19 09:48:06.794926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:54400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.657 [2024-11-19 09:48:06.794942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:07.657 [2024-11-19 09:48:06.794964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:54408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.657 [2024-11-19 09:48:06.794988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:07.657 [2024-11-19 09:48:06.795012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:54416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.657 [2024-11-19 09:48:06.795029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:07.657 [2024-11-19 09:48:06.795051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:54424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.657 [2024-11-19 09:48:06.795067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:07.657 [2024-11-19 09:48:06.795089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:54432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.657 [2024-11-19 09:48:06.795104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:21:07.657 [2024-11-19 09:48:06.795126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:54440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.657 [2024-11-19 09:48:06.795142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:21:07.657 [2024-11-19 09:48:06.795164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:54448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:21:07.657 [2024-11-19 09:48:06.795180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:21:07.657 [2024-11-19 09:48:06.795228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:54456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.657 [2024-11-19 09:48:06.795250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:07.657 [2024-11-19 09:48:06.795273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:54464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.657 [2024-11-19 09:48:06.795289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:07.657 [2024-11-19 09:48:06.795311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:54472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.657 [2024-11-19 09:48:06.795328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:07.657 [2024-11-19 09:48:06.795349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:54480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.657 [2024-11-19 09:48:06.795365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:07.657 [2024-11-19 09:48:06.795387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:53848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.658 [2024-11-19 09:48:06.795403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:21:07.658 [2024-11-19 09:48:06.795426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:53856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.658 [2024-11-19 09:48:06.795447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:21:07.658 [2024-11-19 09:48:06.795470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:53864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.658 [2024-11-19 09:48:06.795487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:21:07.658 [2024-11-19 09:48:06.795526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:53872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.658 [2024-11-19 09:48:06.795544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:21:07.658 [2024-11-19 09:48:06.795566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:53880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.658 [2024-11-19 09:48:06.795582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:21:07.658 [2024-11-19 09:48:06.795604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 
lba:53888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.658 [2024-11-19 09:48:06.795620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:21:07.658 [2024-11-19 09:48:06.795641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:53896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.658 [2024-11-19 09:48:06.795657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:21:07.658 [2024-11-19 09:48:06.795679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:53904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.658 [2024-11-19 09:48:06.795695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:21:07.658 [2024-11-19 09:48:06.795733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:54488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.658 [2024-11-19 09:48:06.795754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:21:07.658 [2024-11-19 09:48:06.795777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:54496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.658 [2024-11-19 09:48:06.795794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:21:07.658 [2024-11-19 09:48:06.795815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:54504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.658 [2024-11-19 09:48:06.795831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:21:07.658 [2024-11-19 09:48:06.795853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:54512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.658 [2024-11-19 09:48:06.795869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:21:07.658 [2024-11-19 09:48:06.795891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:54520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.658 [2024-11-19 09:48:06.795906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:21:07.658 [2024-11-19 09:48:06.795928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:54528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.658 [2024-11-19 09:48:06.795944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:21:07.658 [2024-11-19 09:48:06.795966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:54536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.658 [2024-11-19 09:48:06.795982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:21:07.658 [2024-11-19 09:48:06.796012] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:54544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.658 [2024-11-19 09:48:06.796030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:21:07.658 [2024-11-19 09:48:06.796051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:53912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.658 [2024-11-19 09:48:06.796067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:21:07.658 [2024-11-19 09:48:06.796090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:53920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.658 [2024-11-19 09:48:06.796111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:07.658 [2024-11-19 09:48:06.796133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:53928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.658 [2024-11-19 09:48:06.796154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:07.658 [2024-11-19 09:48:06.796176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:53936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.658 [2024-11-19 09:48:06.796193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:21:07.658 [2024-11-19 09:48:06.796228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:53944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.658 [2024-11-19 09:48:06.796247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:21:07.658 [2024-11-19 09:48:06.796270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:53952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.658 [2024-11-19 09:48:06.796286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:21:07.658 [2024-11-19 09:48:06.796308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:53960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.658 [2024-11-19 09:48:06.796324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:21:07.658 [2024-11-19 09:48:06.796346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:53968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.658 [2024-11-19 09:48:06.796361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:21:07.658 [2024-11-19 09:48:06.796384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:54552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.658 [2024-11-19 09:48:06.796400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 
00:21:07.658 [2024-11-19 09:48:06.796422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:54560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.658 [2024-11-19 09:48:06.796438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:21:07.658 [2024-11-19 09:48:06.796459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:54568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.658 [2024-11-19 09:48:06.796475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:21:07.658 [2024-11-19 09:48:06.796497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:54576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.658 [2024-11-19 09:48:06.796521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:21:07.658 [2024-11-19 09:48:06.796545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:54584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.658 [2024-11-19 09:48:06.796561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:21:07.658 [2024-11-19 09:48:06.796583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:54592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.658 [2024-11-19 09:48:06.796599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:21:07.658 [2024-11-19 09:48:06.796621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:54600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.658 [2024-11-19 09:48:06.796637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:07.658 [2024-11-19 09:48:06.796659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:54608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.658 [2024-11-19 09:48:06.796675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:21:07.658 [2024-11-19 09:48:06.796696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:53976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.658 [2024-11-19 09:48:06.796712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:21:07.658 [2024-11-19 09:48:06.796735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:53984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.658 [2024-11-19 09:48:06.796756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:21:07.658 [2024-11-19 09:48:06.796779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:53992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.658 [2024-11-19 09:48:06.796795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:21:07.658 [2024-11-19 09:48:06.796817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:54000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.658 [2024-11-19 09:48:06.796833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:21:07.658 [2024-11-19 09:48:06.796855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:54008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.658 [2024-11-19 09:48:06.796871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:21:07.658 [2024-11-19 09:48:06.796893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:54016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.658 [2024-11-19 09:48:06.796909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:21:07.658 [2024-11-19 09:48:06.796931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:54024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.658 [2024-11-19 09:48:06.796947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:21:07.659 [2024-11-19 09:48:06.798548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:54032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.659 [2024-11-19 09:48:06.798595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:21:07.659 [2024-11-19 09:48:06.798628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:54616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.659 [2024-11-19 09:48:06.798646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:21:07.659 [2024-11-19 09:48:06.798669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:54624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.659 [2024-11-19 09:48:06.798686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:21:07.659 [2024-11-19 09:48:06.798708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:54632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.659 [2024-11-19 09:48:06.798724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:21:07.659 [2024-11-19 09:48:06.798746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:54640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.659 [2024-11-19 09:48:06.798762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:21:07.659 [2024-11-19 09:48:06.798784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:54648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.659 [2024-11-19 09:48:06.798800] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:21:07.659 [2024-11-19 09:48:06.798822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:54656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.659 [2024-11-19 09:48:06.798838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:21:07.659 [2024-11-19 09:48:06.798860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:54664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.659 [2024-11-19 09:48:06.798877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:21:07.659 [2024-11-19 09:48:06.798913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:54672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.659 [2024-11-19 09:48:06.798935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:21:07.659 8589.78 IOPS, 33.55 MiB/s [2024-11-19T09:48:55.282Z] 8614.80 IOPS, 33.65 MiB/s [2024-11-19T09:48:55.282Z] 8636.73 IOPS, 33.74 MiB/s [2024-11-19T09:48:55.282Z] 8667.67 IOPS, 33.86 MiB/s [2024-11-19T09:48:55.282Z] 8690.15 IOPS, 33.95 MiB/s [2024-11-19T09:48:55.282Z] 8701.43 IOPS, 33.99 MiB/s [2024-11-19T09:48:55.282Z] 8707.47 IOPS, 34.01 MiB/s [2024-11-19T09:48:55.282Z] [2024-11-19 09:48:13.457116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:1440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.659 [2024-11-19 09:48:13.457202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:21:07.659 [2024-11-19 09:48:13.457306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:1448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.659 [2024-11-19 09:48:13.457329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:21:07.659 [2024-11-19 09:48:13.457353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:1456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.659 [2024-11-19 09:48:13.457369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:21:07.659 [2024-11-19 09:48:13.457434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:1464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.659 [2024-11-19 09:48:13.457452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:21:07.659 [2024-11-19 09:48:13.457475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.659 [2024-11-19 09:48:13.457490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:21:07.659 [2024-11-19 09:48:13.457511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:1480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:21:07.659 [2024-11-19 09:48:13.457526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:21:07.659 [2024-11-19 09:48:13.457547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:1488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.659 [2024-11-19 09:48:13.457576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:21:07.659 [2024-11-19 09:48:13.457596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:1496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.659 [2024-11-19 09:48:13.457611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:21:07.659 [2024-11-19 09:48:13.457632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:1504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.659 [2024-11-19 09:48:13.457646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:21:07.659 [2024-11-19 09:48:13.457666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:1512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.659 [2024-11-19 09:48:13.457696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:21:07.659 [2024-11-19 09:48:13.457716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:1520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.659 [2024-11-19 09:48:13.457730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:21:07.659 [2024-11-19 09:48:13.457749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:1528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.659 [2024-11-19 09:48:13.457770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:21:07.659 [2024-11-19 09:48:13.457791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:1536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.659 [2024-11-19 09:48:13.457812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:21:07.659 [2024-11-19 09:48:13.457834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:1544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.659 [2024-11-19 09:48:13.457848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:07.659 [2024-11-19 09:48:13.457868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:1552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.659 [2024-11-19 09:48:13.457881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:07.659 [2024-11-19 09:48:13.457901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:1560 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.659 [2024-11-19 09:48:13.457924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:21:07.659 [2024-11-19 09:48:13.457946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.659 [2024-11-19 09:48:13.457960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:21:07.659 [2024-11-19 09:48:13.457984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:1000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.659 [2024-11-19 09:48:13.458017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:21:07.659 [2024-11-19 09:48:13.458041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:1008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.659 [2024-11-19 09:48:13.458056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:21:07.659 [2024-11-19 09:48:13.458075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.659 [2024-11-19 09:48:13.458090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:21:07.659 [2024-11-19 09:48:13.458110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:1024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.659 [2024-11-19 09:48:13.458124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:21:07.659 [2024-11-19 09:48:13.458144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:1032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.659 [2024-11-19 09:48:13.458158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:21:07.659 [2024-11-19 09:48:13.458178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:1040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.659 [2024-11-19 09:48:13.458192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:21:07.659 [2024-11-19 09:48:13.458235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:1048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.660 [2024-11-19 09:48:13.458253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:21:07.660 [2024-11-19 09:48:13.458278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:1568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.660 [2024-11-19 09:48:13.458296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:21:07.660 [2024-11-19 09:48:13.458316] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:1576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.660 [2024-11-19 09:48:13.458331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:21:07.660 [2024-11-19 09:48:13.458351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:1584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.660 [2024-11-19 09:48:13.458366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:21:07.660 [2024-11-19 09:48:13.458386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:1592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.660 [2024-11-19 09:48:13.458400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:21:07.660 [2024-11-19 09:48:13.458431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:1600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.660 [2024-11-19 09:48:13.458446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:21:07.660 [2024-11-19 09:48:13.458466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:1608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.660 [2024-11-19 09:48:13.458480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:21:07.660 [2024-11-19 09:48:13.458500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:1616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.660 [2024-11-19 09:48:13.458514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:21:07.660 [2024-11-19 09:48:13.458534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:1624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.660 [2024-11-19 09:48:13.458549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:07.660 [2024-11-19 09:48:13.458568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:1056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.660 [2024-11-19 09:48:13.458583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:21:07.660 [2024-11-19 09:48:13.458603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:1064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.660 [2024-11-19 09:48:13.458618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:21:07.660 [2024-11-19 09:48:13.458638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:1072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.660 [2024-11-19 09:48:13.458652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:21:07.660 [2024-11-19 09:48:13.458672] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.660 [2024-11-19 09:48:13.458687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:21:07.660 [2024-11-19 09:48:13.458706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.660 [2024-11-19 09:48:13.458721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:21:07.660 [2024-11-19 09:48:13.458740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.660 [2024-11-19 09:48:13.458755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:21:07.660 [2024-11-19 09:48:13.458774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:1104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.660 [2024-11-19 09:48:13.458789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:07.660 [2024-11-19 09:48:13.458809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:1112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.660 [2024-11-19 09:48:13.458823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:07.660 [2024-11-19 09:48:13.458850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:1120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.660 [2024-11-19 09:48:13.458902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:21:07.660 [2024-11-19 09:48:13.458925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:1128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.660 [2024-11-19 09:48:13.458942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:21:07.660 [2024-11-19 09:48:13.458962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:1136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.660 [2024-11-19 09:48:13.458993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:21:07.660 [2024-11-19 09:48:13.459034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:1144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.660 [2024-11-19 09:48:13.459058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:07.660 [2024-11-19 09:48:13.459081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:1152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.660 [2024-11-19 09:48:13.459111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:21:07.660 [2024-11-19 09:48:13.459135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:1160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.660 [2024-11-19 09:48:13.459151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.660 [2024-11-19 09:48:13.459172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:1168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.660 [2024-11-19 09:48:13.459188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:07.660 [2024-11-19 09:48:13.459251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:1176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.660 [2024-11-19 09:48:13.459272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:21:07.660 [2024-11-19 09:48:13.459295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:1184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.660 [2024-11-19 09:48:13.459311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:21:07.660 [2024-11-19 09:48:13.459334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:1192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.660 [2024-11-19 09:48:13.459350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:21:07.660 [2024-11-19 09:48:13.459371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:1200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.660 [2024-11-19 09:48:13.459387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:21:07.660 [2024-11-19 09:48:13.459408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.660 [2024-11-19 09:48:13.459424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:21:07.660 [2024-11-19 09:48:13.459446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.660 [2024-11-19 09:48:13.459472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:21:07.660 [2024-11-19 09:48:13.459495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:1224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.660 [2024-11-19 09:48:13.459511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:21:07.660 [2024-11-19 09:48:13.459533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:1232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.660 [2024-11-19 09:48:13.459549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:82 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:21:07.660 [2024-11-19 09:48:13.459571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:1240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.660 [2024-11-19 09:48:13.459586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:21:07.660 [2024-11-19 09:48:13.459608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:1632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.660 [2024-11-19 09:48:13.459624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:21:07.660 [2024-11-19 09:48:13.459645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:1640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.660 [2024-11-19 09:48:13.459662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:21:07.660 [2024-11-19 09:48:13.459683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:1648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.660 [2024-11-19 09:48:13.459700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:21:07.660 [2024-11-19 09:48:13.459722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:1656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.660 [2024-11-19 09:48:13.459767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:21:07.660 [2024-11-19 09:48:13.459787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:1664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.661 [2024-11-19 09:48:13.459824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:21:07.661 [2024-11-19 09:48:13.459850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:1672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.661 [2024-11-19 09:48:13.459866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:21:07.661 [2024-11-19 09:48:13.459886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.661 [2024-11-19 09:48:13.459901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:21:07.661 [2024-11-19 09:48:13.459920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:1688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.661 [2024-11-19 09:48:13.459934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:21:07.661 [2024-11-19 09:48:13.459954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:1248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.661 [2024-11-19 09:48:13.459977] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:21:07.661 [2024-11-19 09:48:13.459998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:1256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.661 [2024-11-19 09:48:13.460013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:21:07.661 [2024-11-19 09:48:13.460045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:1264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.661 [2024-11-19 09:48:13.460065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:21:07.661 [2024-11-19 09:48:13.460085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:1272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.661 [2024-11-19 09:48:13.460101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:21:07.661 [2024-11-19 09:48:13.460121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:1280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.661 [2024-11-19 09:48:13.460135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:21:07.661 [2024-11-19 09:48:13.460155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:1288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.661 [2024-11-19 09:48:13.460169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:21:07.661 [2024-11-19 09:48:13.460189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:1296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.661 [2024-11-19 09:48:13.460203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:21:07.661 [2024-11-19 09:48:13.460240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:1304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.661 [2024-11-19 09:48:13.460256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:21:07.661 [2024-11-19 09:48:13.460301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:1696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.661 [2024-11-19 09:48:13.460322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:21:07.661 [2024-11-19 09:48:13.460343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:1704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.661 [2024-11-19 09:48:13.460358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:21:07.661 [2024-11-19 09:48:13.460378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:1712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.661 [2024-11-19 
09:48:13.460393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:21:07.661 [2024-11-19 09:48:13.460413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:1720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.661 [2024-11-19 09:48:13.460427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:21:07.661 [2024-11-19 09:48:13.460448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:1728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.661 [2024-11-19 09:48:13.460462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:21:07.661 [2024-11-19 09:48:13.460508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:1736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.661 [2024-11-19 09:48:13.460541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:07.661 [2024-11-19 09:48:13.460562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:1744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.661 [2024-11-19 09:48:13.460577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:07.661 [2024-11-19 09:48:13.460597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:1752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.661 [2024-11-19 09:48:13.460612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:07.661 [2024-11-19 09:48:13.460632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:1312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.661 [2024-11-19 09:48:13.460648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:07.661 [2024-11-19 09:48:13.460669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:1320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.661 [2024-11-19 09:48:13.460692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:07.661 [2024-11-19 09:48:13.460713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:1328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.661 [2024-11-19 09:48:13.460729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:07.661 [2024-11-19 09:48:13.460749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:1336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.661 [2024-11-19 09:48:13.460764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:07.661 [2024-11-19 09:48:13.460784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:1344 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:21:07.661 [2024-11-19 09:48:13.460799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:07.661 [2024-11-19 09:48:13.460819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.661 [2024-11-19 09:48:13.460834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:21:07.661 [2024-11-19 09:48:13.460854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:1360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.661 [2024-11-19 09:48:13.460869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:21:07.661 [2024-11-19 09:48:13.460921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:1368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.661 [2024-11-19 09:48:13.460957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:21:07.661 [2024-11-19 09:48:13.460979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:1760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.661 [2024-11-19 09:48:13.460994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:07.661 [2024-11-19 09:48:13.461054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:1768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.661 [2024-11-19 09:48:13.461075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:07.661 [2024-11-19 09:48:13.461116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:1776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.661 [2024-11-19 09:48:13.461149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:07.661 [2024-11-19 09:48:13.461179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:1784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.661 [2024-11-19 09:48:13.461197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:07.661 [2024-11-19 09:48:13.461232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:1792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.661 [2024-11-19 09:48:13.461250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:21:07.661 [2024-11-19 09:48:13.461271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:1800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.661 [2024-11-19 09:48:13.461286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:21:07.661 [2024-11-19 09:48:13.461308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:7 nsid:1 lba:1808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.661 [2024-11-19 09:48:13.461323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:21:07.661 [2024-11-19 09:48:13.461344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:1816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.661 [2024-11-19 09:48:13.461359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:21:07.661 [2024-11-19 09:48:13.461379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:1376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.661 [2024-11-19 09:48:13.461395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:21:07.661 [2024-11-19 09:48:13.461416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:1384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.661 [2024-11-19 09:48:13.461438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:21:07.661 [2024-11-19 09:48:13.461459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:1392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.662 [2024-11-19 09:48:13.461489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:21:07.662 [2024-11-19 09:48:13.461510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:1400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.662 [2024-11-19 09:48:13.461524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:21:07.662 [2024-11-19 09:48:13.461545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.662 [2024-11-19 09:48:13.461559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:21:07.662 [2024-11-19 09:48:13.461580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:1416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.662 [2024-11-19 09:48:13.461604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:21:07.662 [2024-11-19 09:48:13.461626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:1424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.662 [2024-11-19 09:48:13.461642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:21:07.662 [2024-11-19 09:48:13.462479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:1432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.662 [2024-11-19 09:48:13.462509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:21:07.662 [2024-11-19 09:48:13.462541] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:1824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.662 [2024-11-19 09:48:13.462559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:21:07.662 [2024-11-19 09:48:13.462585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:1832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.662 [2024-11-19 09:48:13.462600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:21:07.662 [2024-11-19 09:48:13.462626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:1840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.662 [2024-11-19 09:48:13.462641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:21:07.662 [2024-11-19 09:48:13.462667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:1848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.662 [2024-11-19 09:48:13.462682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:21:07.662 [2024-11-19 09:48:13.462707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:1856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.662 [2024-11-19 09:48:13.462722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:21:07.662 [2024-11-19 09:48:13.462747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:1864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.662 [2024-11-19 09:48:13.462762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:07.662 [2024-11-19 09:48:13.462788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:1872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.662 [2024-11-19 09:48:13.462803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:07.662 [2024-11-19 09:48:13.462843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:1880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.662 [2024-11-19 09:48:13.462862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:21:07.662 [2024-11-19 09:48:13.462938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:1888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.662 [2024-11-19 09:48:13.462959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:21:07.662 [2024-11-19 09:48:13.462993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:1896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.662 [2024-11-19 09:48:13.463039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:21:07.662 
[2024-11-19 09:48:13.463069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:1904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.662 [2024-11-19 09:48:13.463086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:21:07.662 [2024-11-19 09:48:13.463113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:1912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.662 [2024-11-19 09:48:13.463129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:21:07.662 [2024-11-19 09:48:13.463156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:1920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.662 [2024-11-19 09:48:13.463171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:21:07.662 [2024-11-19 09:48:13.463198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:1928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.662 [2024-11-19 09:48:13.463256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:21:07.662 [2024-11-19 09:48:13.463286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:1936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.662 [2024-11-19 09:48:13.463303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:21:07.662 [2024-11-19 09:48:13.463348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:1944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.662 [2024-11-19 09:48:13.463370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:21:07.662 [2024-11-19 09:48:13.463416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.662 [2024-11-19 09:48:13.463437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:21:07.662 [2024-11-19 09:48:13.463466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:1960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.662 [2024-11-19 09:48:13.463482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:21:07.662 [2024-11-19 09:48:13.463511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:1968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.662 [2024-11-19 09:48:13.463528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:07.662 [2024-11-19 09:48:13.463572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:1976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.662 [2024-11-19 09:48:13.463599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 
sqhd:004f p:0 m:0 dnr:0 00:21:07.662 [2024-11-19 09:48:13.463630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:1984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.662 [2024-11-19 09:48:13.463662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:21:07.662 [2024-11-19 09:48:13.463693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:1992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.662 [2024-11-19 09:48:13.463717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:21:07.662 [2024-11-19 09:48:13.463757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:2000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.662 [2024-11-19 09:48:13.463789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:21:07.662 [2024-11-19 09:48:13.463816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:2008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.662 [2024-11-19 09:48:13.463832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:21:07.662 8199.75 IOPS, 32.03 MiB/s [2024-11-19T09:48:55.285Z] 8122.18 IOPS, 31.73 MiB/s [2024-11-19T09:48:55.285Z] 8087.83 IOPS, 31.59 MiB/s [2024-11-19T09:48:55.285Z] 8057.53 IOPS, 31.47 MiB/s [2024-11-19T09:48:55.285Z] 8036.65 IOPS, 31.39 MiB/s [2024-11-19T09:48:55.285Z] 8010.90 IOPS, 31.29 MiB/s [2024-11-19T09:48:55.285Z] 7995.86 IOPS, 31.23 MiB/s [2024-11-19T09:48:55.285Z] [2024-11-19 09:48:20.583161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:109928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.662 [2024-11-19 09:48:20.583269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:07.662 [2024-11-19 09:48:20.583330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:109936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.662 [2024-11-19 09:48:20.583353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:21:07.662 [2024-11-19 09:48:20.583378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:109944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.662 [2024-11-19 09:48:20.583394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:21:07.662 [2024-11-19 09:48:20.583416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:109952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.662 [2024-11-19 09:48:20.583432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:21:07.662 [2024-11-19 09:48:20.583453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:109960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.662 [2024-11-19 09:48:20.583469] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:21:07.662 [2024-11-19 09:48:20.583492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:109968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.662 [2024-11-19 09:48:20.583507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:21:07.662 [2024-11-19 09:48:20.583538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:109976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.662 [2024-11-19 09:48:20.583554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:21:07.662 [2024-11-19 09:48:20.583575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:109984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.662 [2024-11-19 09:48:20.583591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:21:07.662 [2024-11-19 09:48:20.583613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:109352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.663 [2024-11-19 09:48:20.583629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:21:07.663 [2024-11-19 09:48:20.583650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:109360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.663 [2024-11-19 09:48:20.583693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:21:07.663 [2024-11-19 09:48:20.583718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:109368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.663 [2024-11-19 09:48:20.583734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:21:07.663 [2024-11-19 09:48:20.583756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:109376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.663 [2024-11-19 09:48:20.583785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:21:07.663 [2024-11-19 09:48:20.583806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:109384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.663 [2024-11-19 09:48:20.583821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:07.663 [2024-11-19 09:48:20.583856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:109392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.663 [2024-11-19 09:48:20.583871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:21:07.663 [2024-11-19 09:48:20.583891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:109400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.663 [2024-11-19 
09:48:20.583906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:21:07.663 [2024-11-19 09:48:20.583926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:109408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.663 [2024-11-19 09:48:20.583941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:21:07.663 [2024-11-19 09:48:20.583961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:109416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.663 [2024-11-19 09:48:20.583976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:21:07.663 [2024-11-19 09:48:20.583998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:109424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.663 [2024-11-19 09:48:20.584014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:21:07.663 [2024-11-19 09:48:20.584035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:109432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.663 [2024-11-19 09:48:20.584049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:21:07.663 [2024-11-19 09:48:20.584070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:109440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.663 [2024-11-19 09:48:20.584085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:21:07.663 [2024-11-19 09:48:20.584121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:109448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.663 [2024-11-19 09:48:20.584137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:21:07.663 [2024-11-19 09:48:20.584158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:109456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.663 [2024-11-19 09:48:20.584182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:21:07.663 [2024-11-19 09:48:20.584205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:109464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.663 [2024-11-19 09:48:20.584221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:21:07.663 [2024-11-19 09:48:20.584242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:109472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.663 [2024-11-19 09:48:20.584272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:21:07.663 [2024-11-19 09:48:20.584296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:109480 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.663 [2024-11-19 09:48:20.584316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:21:07.663 [2024-11-19 09:48:20.584337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:109488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.663 [2024-11-19 09:48:20.584352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:21:07.663 [2024-11-19 09:48:20.584373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:109496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.663 [2024-11-19 09:48:20.584388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:21:07.663 [2024-11-19 09:48:20.584409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:109504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.663 [2024-11-19 09:48:20.584442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:21:07.663 [2024-11-19 09:48:20.584463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:109512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.663 [2024-11-19 09:48:20.584518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:21:07.663 [2024-11-19 09:48:20.584540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:109520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.663 [2024-11-19 09:48:20.584556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:21:07.663 [2024-11-19 09:48:20.584577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:109528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.663 [2024-11-19 09:48:20.584593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:21:07.663 [2024-11-19 09:48:20.584615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:109536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.663 [2024-11-19 09:48:20.584631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:07.663 [2024-11-19 09:48:20.584657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:109992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.663 [2024-11-19 09:48:20.584675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:07.663 [2024-11-19 09:48:20.584698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:110000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.663 [2024-11-19 09:48:20.584715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:21:07.663 [2024-11-19 09:48:20.584747] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:110008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.663 [2024-11-19 09:48:20.584764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:21:07.663 [2024-11-19 09:48:20.584786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:110016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.663 [2024-11-19 09:48:20.584801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:21:07.663 [2024-11-19 09:48:20.584837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:110024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.663 [2024-11-19 09:48:20.584853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:21:07.663 [2024-11-19 09:48:20.584874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:110032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.663 [2024-11-19 09:48:20.584889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:21:07.663 [2024-11-19 09:48:20.584910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:110040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.663 [2024-11-19 09:48:20.584925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:21:07.663 [2024-11-19 09:48:20.584946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:110048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.663 [2024-11-19 09:48:20.584961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:21:07.663 [2024-11-19 09:48:20.584982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:110056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.663 [2024-11-19 09:48:20.584998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:21:07.663 [2024-11-19 09:48:20.585018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:110064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.663 [2024-11-19 09:48:20.585048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:21:07.663 [2024-11-19 09:48:20.585069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:110072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.663 [2024-11-19 09:48:20.585084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:21:07.663 [2024-11-19 09:48:20.585120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:110080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.663 [2024-11-19 09:48:20.585135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:006d p:0 m:0 
dnr:0 00:21:07.663 [2024-11-19 09:48:20.585156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:110088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.663 [2024-11-19 09:48:20.585171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:21:07.663 [2024-11-19 09:48:20.585192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:110096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.663 [2024-11-19 09:48:20.585207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:21:07.663 [2024-11-19 09:48:20.585240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:110104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.663 [2024-11-19 09:48:20.585275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:21:07.664 [2024-11-19 09:48:20.585297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:110112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.664 [2024-11-19 09:48:20.585312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:21:07.664 [2024-11-19 09:48:20.585352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:109544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.664 [2024-11-19 09:48:20.585369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:21:07.664 [2024-11-19 09:48:20.585393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:109552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.664 [2024-11-19 09:48:20.585410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:07.664 [2024-11-19 09:48:20.585432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:109560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.664 [2024-11-19 09:48:20.585447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:21:07.664 [2024-11-19 09:48:20.585469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:109568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.664 [2024-11-19 09:48:20.585484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:21:07.664 [2024-11-19 09:48:20.585506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:109576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.664 [2024-11-19 09:48:20.585522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:21:07.664 [2024-11-19 09:48:20.585544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:109584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.664 [2024-11-19 09:48:20.585559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:21:07.664 [2024-11-19 09:48:20.585580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:109592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.664 [2024-11-19 09:48:20.585596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:21:07.664 [2024-11-19 09:48:20.585618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:109600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.664 [2024-11-19 09:48:20.585633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:21:07.664 [2024-11-19 09:48:20.585659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:110120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.664 [2024-11-19 09:48:20.585677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:07.664 [2024-11-19 09:48:20.585698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:110128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.664 [2024-11-19 09:48:20.585714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:07.664 [2024-11-19 09:48:20.585736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:110136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.664 [2024-11-19 09:48:20.585759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:21:07.664 [2024-11-19 09:48:20.585797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:110144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.664 [2024-11-19 09:48:20.585813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:21:07.664 [2024-11-19 09:48:20.585834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:110152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.664 [2024-11-19 09:48:20.585850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:21:07.664 [2024-11-19 09:48:20.585870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:110160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.664 [2024-11-19 09:48:20.585886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:07.664 [2024-11-19 09:48:20.585906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.664 [2024-11-19 09:48:20.585922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.664 [2024-11-19 09:48:20.585943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:110176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.664 [2024-11-19 09:48:20.585958] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.664 [2024-11-19 09:48:20.585979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:109608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.664 [2024-11-19 09:48:20.585994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:07.664 [2024-11-19 09:48:20.586016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:109616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.664 [2024-11-19 09:48:20.586031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:21:07.664 [2024-11-19 09:48:20.586052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:109624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.664 [2024-11-19 09:48:20.586067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:21:07.664 [2024-11-19 09:48:20.586088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:109632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.664 [2024-11-19 09:48:20.586119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:21:07.664 [2024-11-19 09:48:20.586141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:109640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.664 [2024-11-19 09:48:20.586156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:21:07.664 [2024-11-19 09:48:20.586178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:109648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.664 [2024-11-19 09:48:20.586193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:21:07.664 [2024-11-19 09:48:20.586214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:109656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.664 [2024-11-19 09:48:20.586248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:21:07.664 [2024-11-19 09:48:20.586276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:109664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.664 [2024-11-19 09:48:20.586292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:21:07.664 [2024-11-19 09:48:20.586314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:109672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.664 [2024-11-19 09:48:20.586330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:21:07.664 [2024-11-19 09:48:20.586351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:109680 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.664 [2024-11-19 09:48:20.586367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:21:07.664 [2024-11-19 09:48:20.586389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:109688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.664 [2024-11-19 09:48:20.586404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:21:07.664 [2024-11-19 09:48:20.586426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:109696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.664 [2024-11-19 09:48:20.586442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:21:07.664 [2024-11-19 09:48:20.586463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:109704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.664 [2024-11-19 09:48:20.586479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:21:07.664 [2024-11-19 09:48:20.586500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:109712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.664 [2024-11-19 09:48:20.586516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:21:07.664 [2024-11-19 09:48:20.586537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:109720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.664 [2024-11-19 09:48:20.586553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:21:07.664 [2024-11-19 09:48:20.586574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:109728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.664 [2024-11-19 09:48:20.586590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:21:07.664 [2024-11-19 09:48:20.586611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:109736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.664 [2024-11-19 09:48:20.586628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:21:07.665 [2024-11-19 09:48:20.586651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:109744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.665 [2024-11-19 09:48:20.586667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:21:07.665 [2024-11-19 09:48:20.586688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:109752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.665 [2024-11-19 09:48:20.586712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:21:07.665 [2024-11-19 09:48:20.586735] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:109760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.665 [2024-11-19 09:48:20.586751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:21:07.665 [2024-11-19 09:48:20.586773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:109768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.665 [2024-11-19 09:48:20.586804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:21:07.665 [2024-11-19 09:48:20.586825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:109776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.665 [2024-11-19 09:48:20.586840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:21:07.665 [2024-11-19 09:48:20.586861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:109784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.665 [2024-11-19 09:48:20.586877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:21:07.665 [2024-11-19 09:48:20.586898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:109792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.665 [2024-11-19 09:48:20.586913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:21:07.665 [2024-11-19 09:48:20.586939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:110184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.665 [2024-11-19 09:48:20.586956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:21:07.665 [2024-11-19 09:48:20.586977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:110192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.665 [2024-11-19 09:48:20.586993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:21:07.665 [2024-11-19 09:48:20.587014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:110200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.665 [2024-11-19 09:48:20.587029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:21:07.665 [2024-11-19 09:48:20.587050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:110208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.665 [2024-11-19 09:48:20.587065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:21:07.665 [2024-11-19 09:48:20.587086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:110216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.665 [2024-11-19 09:48:20.587118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 
sqhd:001e p:0 m:0 dnr:0 00:21:07.665 [2024-11-19 09:48:20.587139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:110224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.665 [2024-11-19 09:48:20.587155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:21:07.665 [2024-11-19 09:48:20.587177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:110232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.665 [2024-11-19 09:48:20.587193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:21:07.665 [2024-11-19 09:48:20.587250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:110240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.665 [2024-11-19 09:48:20.587269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:07.665 [2024-11-19 09:48:20.587290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:109800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.665 [2024-11-19 09:48:20.587307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:07.665 [2024-11-19 09:48:20.587330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:109808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.665 [2024-11-19 09:48:20.587345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:07.665 [2024-11-19 09:48:20.587368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:109816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.665 [2024-11-19 09:48:20.587384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:07.665 [2024-11-19 09:48:20.587406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:109824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.665 [2024-11-19 09:48:20.587422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:07.665 [2024-11-19 09:48:20.587443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:109832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.665 [2024-11-19 09:48:20.587459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:07.665 [2024-11-19 09:48:20.587480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:109840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.665 [2024-11-19 09:48:20.587496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:07.665 [2024-11-19 09:48:20.587518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:109848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.665 [2024-11-19 09:48:20.587534] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:07.665 [2024-11-19 09:48:20.587556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:109856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.665 [2024-11-19 09:48:20.587571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:21:07.665 [2024-11-19 09:48:20.587592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:110248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.665 [2024-11-19 09:48:20.587608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:21:07.665 [2024-11-19 09:48:20.587629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:110256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.665 [2024-11-19 09:48:20.587645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:21:07.665 [2024-11-19 09:48:20.587666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:110264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.665 [2024-11-19 09:48:20.587682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:07.665 [2024-11-19 09:48:20.587713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:110272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.665 [2024-11-19 09:48:20.587731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:07.665 [2024-11-19 09:48:20.587760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:110280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.665 [2024-11-19 09:48:20.587776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:07.665 [2024-11-19 09:48:20.587797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:110288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.665 [2024-11-19 09:48:20.587813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:07.665 [2024-11-19 09:48:20.587849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:110296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.665 [2024-11-19 09:48:20.587864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:21:07.665 [2024-11-19 09:48:20.587885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:110304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.665 [2024-11-19 09:48:20.587900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:21:07.665 [2024-11-19 09:48:20.587921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:109864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.665 [2024-11-19 
09:48:20.587937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:21:07.665 [2024-11-19 09:48:20.587959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:109872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.665 [2024-11-19 09:48:20.588000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:21:07.665 [2024-11-19 09:48:20.588021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:109880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.665 [2024-11-19 09:48:20.588036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:21:07.665 [2024-11-19 09:48:20.588057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:109888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.665 [2024-11-19 09:48:20.588072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:21:07.665 [2024-11-19 09:48:20.588108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:109896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.665 [2024-11-19 09:48:20.588130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:21:07.665 [2024-11-19 09:48:20.588151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:109904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.665 [2024-11-19 09:48:20.588166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:21:07.665 [2024-11-19 09:48:20.588187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:109912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.665 [2024-11-19 09:48:20.588203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:21:07.665 [2024-11-19 09:48:20.588937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:109920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.666 [2024-11-19 09:48:20.588965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:21:07.666 [2024-11-19 09:48:20.588998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:110312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.666 [2024-11-19 09:48:20.589016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:21:07.666 [2024-11-19 09:48:20.589044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:110320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.666 [2024-11-19 09:48:20.589059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:21:07.666 [2024-11-19 09:48:20.589087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:110328 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.666 [2024-11-19 09:48:20.589118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:21:07.666 [2024-11-19 09:48:20.589146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:110336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.666 [2024-11-19 09:48:20.589162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:21:07.666 [2024-11-19 09:48:20.589190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:110344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.666 [2024-11-19 09:48:20.589206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:21:07.666 [2024-11-19 09:48:20.589234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:110352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.666 [2024-11-19 09:48:20.589249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:21:07.666 [2024-11-19 09:48:20.589278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:110360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.666 [2024-11-19 09:48:20.589293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:21:07.666 [2024-11-19 09:48:20.589357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:110368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.666 [2024-11-19 09:48:20.589378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:07.666 7717.78 IOPS, 30.15 MiB/s [2024-11-19T09:48:55.289Z] 7396.21 IOPS, 28.89 MiB/s [2024-11-19T09:48:55.289Z] 7100.36 IOPS, 27.74 MiB/s [2024-11-19T09:48:55.289Z] 6827.27 IOPS, 26.67 MiB/s [2024-11-19T09:48:55.289Z] 6574.41 IOPS, 25.68 MiB/s [2024-11-19T09:48:55.289Z] 6339.61 IOPS, 24.76 MiB/s [2024-11-19T09:48:55.289Z] 6121.00 IOPS, 23.91 MiB/s [2024-11-19T09:48:55.289Z] 6137.47 IOPS, 23.97 MiB/s [2024-11-19T09:48:55.289Z] 6207.87 IOPS, 24.25 MiB/s [2024-11-19T09:48:55.289Z] 6283.12 IOPS, 24.54 MiB/s [2024-11-19T09:48:55.289Z] 6355.27 IOPS, 24.83 MiB/s [2024-11-19T09:48:55.289Z] 6423.47 IOPS, 25.09 MiB/s [2024-11-19T09:48:55.289Z] 6491.66 IOPS, 25.36 MiB/s [2024-11-19T09:48:55.289Z] [2024-11-19 09:48:34.146068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:35680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.666 [2024-11-19 09:48:34.146140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:21:07.666 [2024-11-19 09:48:34.146213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:35688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.666 [2024-11-19 09:48:34.146250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:21:07.666 [2024-11-19 09:48:34.146311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:123 nsid:1 lba:35696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.666 [2024-11-19 09:48:34.146328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:21:07.666 [2024-11-19 09:48:34.146348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:35704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.666 [2024-11-19 09:48:34.146363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:21:07.666 [2024-11-19 09:48:34.146382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:35712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.666 [2024-11-19 09:48:34.146397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:21:07.666 [2024-11-19 09:48:34.146421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:35720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.666 [2024-11-19 09:48:34.146435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:21:07.666 [2024-11-19 09:48:34.146454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:35728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.666 [2024-11-19 09:48:34.146469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:21:07.666 [2024-11-19 09:48:34.146488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:35736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.666 [2024-11-19 09:48:34.146503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:21:07.666 [2024-11-19 09:48:34.146552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:35744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.666 [2024-11-19 09:48:34.146572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.666 [2024-11-19 09:48:34.146587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:35752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.666 [2024-11-19 09:48:34.146600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.666 [2024-11-19 09:48:34.146614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:35760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.666 [2024-11-19 09:48:34.146627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.666 [2024-11-19 09:48:34.146641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:35768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.666 [2024-11-19 09:48:34.146653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.666 [2024-11-19 09:48:34.146667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:24 nsid:1 lba:35776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.666 [2024-11-19 09:48:34.146679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.666 [2024-11-19 09:48:34.146693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:35784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.666 [2024-11-19 09:48:34.146705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.666 [2024-11-19 09:48:34.146719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:35792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.666 [2024-11-19 09:48:34.146743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.666 [2024-11-19 09:48:34.146758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:35800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.666 [2024-11-19 09:48:34.146772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.666 [2024-11-19 09:48:34.146786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:35296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.666 [2024-11-19 09:48:34.146799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.666 [2024-11-19 09:48:34.146818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:35304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.666 [2024-11-19 09:48:34.146833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.666 [2024-11-19 09:48:34.146848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:35312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.666 [2024-11-19 09:48:34.146861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.666 [2024-11-19 09:48:34.146875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:35320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.666 [2024-11-19 09:48:34.146888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.666 [2024-11-19 09:48:34.146902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:35328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.666 [2024-11-19 09:48:34.146915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.666 [2024-11-19 09:48:34.146928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:35336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.666 [2024-11-19 09:48:34.146941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.666 [2024-11-19 09:48:34.146955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:35344 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.666 [2024-11-19 09:48:34.146968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.666 [2024-11-19 09:48:34.146982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:35352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.666 [2024-11-19 09:48:34.146994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.666 [2024-11-19 09:48:34.147008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:35360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.666 [2024-11-19 09:48:34.147021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.666 [2024-11-19 09:48:34.147035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:35368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.667 [2024-11-19 09:48:34.147048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.667 [2024-11-19 09:48:34.147061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:35376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.667 [2024-11-19 09:48:34.147074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.667 [2024-11-19 09:48:34.147095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:35384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.667 [2024-11-19 09:48:34.147108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.667 [2024-11-19 09:48:34.147122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:35392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.667 [2024-11-19 09:48:34.147135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.667 [2024-11-19 09:48:34.147148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:35400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.667 [2024-11-19 09:48:34.147161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.667 [2024-11-19 09:48:34.147175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:35408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.667 [2024-11-19 09:48:34.147187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.667 [2024-11-19 09:48:34.147201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:35416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.667 [2024-11-19 09:48:34.147272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.667 [2024-11-19 09:48:34.147290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:35808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:21:07.667 [2024-11-19 09:48:34.147304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.667 [2024-11-19 09:48:34.147320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:35816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.667 [2024-11-19 09:48:34.147334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.667 [2024-11-19 09:48:34.147349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:35824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.667 [2024-11-19 09:48:34.147362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.667 [2024-11-19 09:48:34.147377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:35832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.667 [2024-11-19 09:48:34.147391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.667 [2024-11-19 09:48:34.147405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:35840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.667 [2024-11-19 09:48:34.147419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.667 [2024-11-19 09:48:34.147434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:35848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.667 [2024-11-19 09:48:34.147447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.667 [2024-11-19 09:48:34.147462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:35856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.667 [2024-11-19 09:48:34.147476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.667 [2024-11-19 09:48:34.147490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:35864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.667 [2024-11-19 09:48:34.147512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.667 [2024-11-19 09:48:34.147529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:35872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.667 [2024-11-19 09:48:34.147542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.667 [2024-11-19 09:48:34.147572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:35880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.667 [2024-11-19 09:48:34.147585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.667 [2024-11-19 09:48:34.147600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:35888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.667 [2024-11-19 09:48:34.147613] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.667 [2024-11-19 09:48:34.147627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:35896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.667 [2024-11-19 09:48:34.147640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.667 [2024-11-19 09:48:34.147654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:35424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.667 [2024-11-19 09:48:34.147667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.667 [2024-11-19 09:48:34.147681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:35432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.667 [2024-11-19 09:48:34.147694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.667 [2024-11-19 09:48:34.147709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:35440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.667 [2024-11-19 09:48:34.147721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.667 [2024-11-19 09:48:34.147736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:35448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.667 [2024-11-19 09:48:34.147749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.667 [2024-11-19 09:48:34.147767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:35456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.667 [2024-11-19 09:48:34.147795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.667 [2024-11-19 09:48:34.147810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:35464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.667 [2024-11-19 09:48:34.147823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.667 [2024-11-19 09:48:34.147837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:35472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.667 [2024-11-19 09:48:34.147850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.667 [2024-11-19 09:48:34.147864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:35480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.667 [2024-11-19 09:48:34.147877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.667 [2024-11-19 09:48:34.147891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:35904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.667 [2024-11-19 09:48:34.147909] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.667 [2024-11-19 09:48:34.147924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:35912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.667 [2024-11-19 09:48:34.147937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.667 [2024-11-19 09:48:34.147950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:35920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.667 [2024-11-19 09:48:34.147963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.667 [2024-11-19 09:48:34.147977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:35928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.667 [2024-11-19 09:48:34.147989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.667 [2024-11-19 09:48:34.148003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:35936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.667 [2024-11-19 09:48:34.148016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.667 [2024-11-19 09:48:34.148030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:35944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.667 [2024-11-19 09:48:34.148042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.667 [2024-11-19 09:48:34.148056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:35952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.667 [2024-11-19 09:48:34.148069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.667 [2024-11-19 09:48:34.148083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:35960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.667 [2024-11-19 09:48:34.148095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.667 [2024-11-19 09:48:34.148109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:35968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.667 [2024-11-19 09:48:34.148122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.667 [2024-11-19 09:48:34.148136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:35976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.667 [2024-11-19 09:48:34.148148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.667 [2024-11-19 09:48:34.148162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:35984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.667 [2024-11-19 09:48:34.148175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.668 [2024-11-19 09:48:34.148189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:35992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.668 [2024-11-19 09:48:34.148201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.668 [2024-11-19 09:48:34.148216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:36000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.668 [2024-11-19 09:48:34.148228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.668 [2024-11-19 09:48:34.148248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:36008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.668 [2024-11-19 09:48:34.148275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.668 [2024-11-19 09:48:34.148290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:36016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.668 [2024-11-19 09:48:34.148304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.668 [2024-11-19 09:48:34.148318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:36024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.668 [2024-11-19 09:48:34.148330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.668 [2024-11-19 09:48:34.148344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:36032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.668 [2024-11-19 09:48:34.148357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.668 [2024-11-19 09:48:34.148371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:36040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.668 [2024-11-19 09:48:34.148384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.668 [2024-11-19 09:48:34.148398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:36048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.668 [2024-11-19 09:48:34.148410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.668 [2024-11-19 09:48:34.148424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:36056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.668 [2024-11-19 09:48:34.148437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.668 [2024-11-19 09:48:34.148451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:36064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.668 [2024-11-19 09:48:34.148463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:21:07.668 [2024-11-19 09:48:34.148477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:36072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.668 [2024-11-19 09:48:34.148490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.668 [2024-11-19 09:48:34.148504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:35488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.668 [2024-11-19 09:48:34.148516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.668 [2024-11-19 09:48:34.148530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:35496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.668 [2024-11-19 09:48:34.148543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.668 [2024-11-19 09:48:34.148557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:35504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.668 [2024-11-19 09:48:34.148570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.668 [2024-11-19 09:48:34.148584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:35512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.668 [2024-11-19 09:48:34.148602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.668 [2024-11-19 09:48:34.148617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:35520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.668 [2024-11-19 09:48:34.148630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.668 [2024-11-19 09:48:34.148644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:35528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.668 [2024-11-19 09:48:34.148657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.668 [2024-11-19 09:48:34.148671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:35536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.668 [2024-11-19 09:48:34.148683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.668 [2024-11-19 09:48:34.148698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:35544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.668 [2024-11-19 09:48:34.148711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.668 [2024-11-19 09:48:34.148725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:36080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.668 [2024-11-19 09:48:34.148738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.668 [2024-11-19 
09:48:34.148752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:36088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.668 [2024-11-19 09:48:34.148764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.668 [2024-11-19 09:48:34.148778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:36096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.668 [2024-11-19 09:48:34.148790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.668 [2024-11-19 09:48:34.148804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:36104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.668 [2024-11-19 09:48:34.148817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.668 [2024-11-19 09:48:34.148831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:36112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.668 [2024-11-19 09:48:34.148843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.668 [2024-11-19 09:48:34.148856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:36120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.668 [2024-11-19 09:48:34.148869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.668 [2024-11-19 09:48:34.148883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:36128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.668 [2024-11-19 09:48:34.148895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.668 [2024-11-19 09:48:34.148909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:36136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.668 [2024-11-19 09:48:34.148922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.668 [2024-11-19 09:48:34.148941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:36144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.668 [2024-11-19 09:48:34.148964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.668 [2024-11-19 09:48:34.148979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:36152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.668 [2024-11-19 09:48:34.148991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.668 [2024-11-19 09:48:34.149005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:36160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.668 [2024-11-19 09:48:34.149017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.668 [2024-11-19 09:48:34.149031] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:36168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.668 [2024-11-19 09:48:34.149044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.668 [2024-11-19 09:48:34.149058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:36176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.668 [2024-11-19 09:48:34.149070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.668 [2024-11-19 09:48:34.149083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:36184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.668 [2024-11-19 09:48:34.149096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.668 [2024-11-19 09:48:34.149110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:36192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.668 [2024-11-19 09:48:34.149122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.668 [2024-11-19 09:48:34.149136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:36200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.668 [2024-11-19 09:48:34.149149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.668 [2024-11-19 09:48:34.149163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:36208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.668 [2024-11-19 09:48:34.149176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.668 [2024-11-19 09:48:34.149190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:35552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.668 [2024-11-19 09:48:34.149202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.668 [2024-11-19 09:48:34.149245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:35560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.668 [2024-11-19 09:48:34.149259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.668 [2024-11-19 09:48:34.149274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:35568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.668 [2024-11-19 09:48:34.149287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.668 [2024-11-19 09:48:34.149302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:35576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.669 [2024-11-19 09:48:34.149315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.669 [2024-11-19 09:48:34.149340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:36 nsid:1 lba:35584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.669 [2024-11-19 09:48:34.149354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.669 [2024-11-19 09:48:34.149369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:35592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.669 [2024-11-19 09:48:34.149382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.669 [2024-11-19 09:48:34.149396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:35600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.669 [2024-11-19 09:48:34.149409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.669 [2024-11-19 09:48:34.149423] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224f290 is same with the state(6) to be set 00:21:07.669 [2024-11-19 09:48:34.149462] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:07.669 [2024-11-19 09:48:34.149472] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:07.669 [2024-11-19 09:48:34.149483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:35608 len:8 PRP1 0x0 PRP2 0x0 00:21:07.669 [2024-11-19 09:48:34.149496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.669 [2024-11-19 09:48:34.149511] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:07.669 [2024-11-19 09:48:34.149521] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:07.669 [2024-11-19 09:48:34.149546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:36216 len:8 PRP1 0x0 PRP2 0x0 00:21:07.669 [2024-11-19 09:48:34.149560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.669 [2024-11-19 09:48:34.149573] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:07.669 [2024-11-19 09:48:34.149583] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:07.669 [2024-11-19 09:48:34.149594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:36224 len:8 PRP1 0x0 PRP2 0x0 00:21:07.669 [2024-11-19 09:48:34.149607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.669 [2024-11-19 09:48:34.149620] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:07.669 [2024-11-19 09:48:34.149631] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:07.669 [2024-11-19 09:48:34.149641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:36232 len:8 PRP1 0x0 PRP2 0x0 00:21:07.669 [2024-11-19 09:48:34.149655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.669 [2024-11-19 09:48:34.149668] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: 
aborting queued i/o 00:21:07.669 [2024-11-19 09:48:34.149679] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:07.669 [2024-11-19 09:48:34.149689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:36240 len:8 PRP1 0x0 PRP2 0x0 00:21:07.669 [2024-11-19 09:48:34.149712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.669 [2024-11-19 09:48:34.149725] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:07.669 [2024-11-19 09:48:34.149735] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:07.669 [2024-11-19 09:48:34.149752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:36248 len:8 PRP1 0x0 PRP2 0x0 00:21:07.669 [2024-11-19 09:48:34.149766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.669 [2024-11-19 09:48:34.149780] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:07.669 [2024-11-19 09:48:34.149790] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:07.669 [2024-11-19 09:48:34.149800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:36256 len:8 PRP1 0x0 PRP2 0x0 00:21:07.669 [2024-11-19 09:48:34.149813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.669 [2024-11-19 09:48:34.149827] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:07.669 [2024-11-19 09:48:34.149837] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:07.669 [2024-11-19 09:48:34.149853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:36264 len:8 PRP1 0x0 PRP2 0x0 00:21:07.669 [2024-11-19 09:48:34.149875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.669 [2024-11-19 09:48:34.149896] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:07.669 [2024-11-19 09:48:34.149907] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:07.669 [2024-11-19 09:48:34.149917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:36272 len:8 PRP1 0x0 PRP2 0x0 00:21:07.669 [2024-11-19 09:48:34.149931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.669 [2024-11-19 09:48:34.149945] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:07.669 [2024-11-19 09:48:34.149954] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:07.669 [2024-11-19 09:48:34.149965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:36280 len:8 PRP1 0x0 PRP2 0x0 00:21:07.669 [2024-11-19 09:48:34.149978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.669 [2024-11-19 09:48:34.149992] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:07.669 [2024-11-19 
09:48:34.150002] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:07.669 [2024-11-19 09:48:34.150013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:36288 len:8 PRP1 0x0 PRP2 0x0 00:21:07.669 [2024-11-19 09:48:34.150026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.669 [2024-11-19 09:48:34.150039] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:07.669 [2024-11-19 09:48:34.150049] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:07.669 [2024-11-19 09:48:34.150060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:36296 len:8 PRP1 0x0 PRP2 0x0 00:21:07.669 [2024-11-19 09:48:34.150074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.669 [2024-11-19 09:48:34.150087] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:07.669 [2024-11-19 09:48:34.150097] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:07.669 [2024-11-19 09:48:34.150107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:36304 len:8 PRP1 0x0 PRP2 0x0 00:21:07.669 [2024-11-19 09:48:34.150120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.669 [2024-11-19 09:48:34.150134] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:07.669 [2024-11-19 09:48:34.150151] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:07.669 [2024-11-19 09:48:34.150162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:36312 len:8 PRP1 0x0 PRP2 0x0 00:21:07.669 [2024-11-19 09:48:34.150175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.669 [2024-11-19 09:48:34.150189] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:07.669 [2024-11-19 09:48:34.150199] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:07.669 [2024-11-19 09:48:34.150209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:35616 len:8 PRP1 0x0 PRP2 0x0 00:21:07.669 [2024-11-19 09:48:34.150222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.669 [2024-11-19 09:48:34.150236] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:07.669 [2024-11-19 09:48:34.150246] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:07.669 [2024-11-19 09:48:34.150271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:35624 len:8 PRP1 0x0 PRP2 0x0 00:21:07.669 [2024-11-19 09:48:34.150286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.669 [2024-11-19 09:48:34.150307] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:07.669 [2024-11-19 09:48:34.150317] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:07.669 [2024-11-19 09:48:34.150328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:35632 len:8 PRP1 0x0 PRP2 0x0 00:21:07.670 [2024-11-19 09:48:34.150341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.670 [2024-11-19 09:48:34.150355] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:07.670 [2024-11-19 09:48:34.150365] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:07.670 [2024-11-19 09:48:34.150375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:35640 len:8 PRP1 0x0 PRP2 0x0 00:21:07.670 [2024-11-19 09:48:34.150389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.670 [2024-11-19 09:48:34.150402] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:07.670 [2024-11-19 09:48:34.150412] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:07.670 [2024-11-19 09:48:34.150423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:35648 len:8 PRP1 0x0 PRP2 0x0 00:21:07.670 [2024-11-19 09:48:34.150436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.670 [2024-11-19 09:48:34.150449] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:07.670 [2024-11-19 09:48:34.150462] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:07.670 [2024-11-19 09:48:34.150473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:35656 len:8 PRP1 0x0 PRP2 0x0 00:21:07.670 [2024-11-19 09:48:34.150486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.670 [2024-11-19 09:48:34.150499] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:07.670 [2024-11-19 09:48:34.150509] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:07.670 [2024-11-19 09:48:34.150519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:35664 len:8 PRP1 0x0 PRP2 0x0 00:21:07.670 [2024-11-19 09:48:34.150532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.670 [2024-11-19 09:48:34.150553] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:07.670 [2024-11-19 09:48:34.150564] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:07.670 [2024-11-19 09:48:34.150574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:35672 len:8 PRP1 0x0 PRP2 0x0 00:21:07.670 [2024-11-19 09:48:34.150587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.670 [2024-11-19 09:48:34.150749] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:07.670 [2024-11-19 09:48:34.150778] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.670 [2024-11-19 09:48:34.150794] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:07.670 [2024-11-19 09:48:34.150807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.670 [2024-11-19 09:48:34.150821] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:07.670 [2024-11-19 09:48:34.150834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.670 [2024-11-19 09:48:34.150848] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:07.670 [2024-11-19 09:48:34.150861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.670 [2024-11-19 09:48:34.150882] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:0014000c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.670 [2024-11-19 09:48:34.150897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.670 [2024-11-19 09:48:34.150916] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c01d0 is same with the state(6) to be set 00:21:07.670 [2024-11-19 09:48:34.152164] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:21:07.670 [2024-11-19 09:48:34.152224] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21c01d0 (9): Bad file descriptor 00:21:07.670 [2024-11-19 09:48:34.152634] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:21:07.670 [2024-11-19 09:48:34.152668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21c01d0 with addr=10.0.0.3, port=4421 00:21:07.670 [2024-11-19 09:48:34.152686] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c01d0 is same with the state(6) to be set 00:21:07.670 [2024-11-19 09:48:34.152762] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21c01d0 (9): Bad file descriptor 00:21:07.670 [2024-11-19 09:48:34.152800] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:21:07.670 [2024-11-19 09:48:34.152818] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:21:07.670 [2024-11-19 09:48:34.152832] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:21:07.670 [2024-11-19 09:48:34.152845] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:21:07.670 [2024-11-19 09:48:34.152860] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:21:07.670 6553.14 IOPS, 25.60 MiB/s [2024-11-19T09:48:55.293Z] 6618.78 IOPS, 25.85 MiB/s [2024-11-19T09:48:55.293Z] 6675.08 IOPS, 26.07 MiB/s [2024-11-19T09:48:55.293Z] 6737.36 IOPS, 26.32 MiB/s [2024-11-19T09:48:55.293Z] 6793.82 IOPS, 26.54 MiB/s [2024-11-19T09:48:55.293Z] 6821.68 IOPS, 26.65 MiB/s [2024-11-19T09:48:55.293Z] 6879.93 IOPS, 26.87 MiB/s [2024-11-19T09:48:55.293Z] 6928.30 IOPS, 27.06 MiB/s [2024-11-19T09:48:55.293Z] 6969.57 IOPS, 27.22 MiB/s [2024-11-19T09:48:55.293Z] 7012.24 IOPS, 27.39 MiB/s [2024-11-19T09:48:55.293Z] [2024-11-19 09:48:44.212195] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 00:21:07.670 7056.17 IOPS, 27.56 MiB/s [2024-11-19T09:48:55.293Z] 7100.70 IOPS, 27.74 MiB/s [2024-11-19T09:48:55.293Z] 7135.50 IOPS, 27.87 MiB/s [2024-11-19T09:48:55.293Z] 7167.51 IOPS, 28.00 MiB/s [2024-11-19T09:48:55.293Z] 7198.24 IOPS, 28.12 MiB/s [2024-11-19T09:48:55.293Z] 7225.10 IOPS, 28.22 MiB/s [2024-11-19T09:48:55.293Z] 7256.69 IOPS, 28.35 MiB/s [2024-11-19T09:48:55.293Z] 7282.57 IOPS, 28.45 MiB/s [2024-11-19T09:48:55.293Z] 7302.07 IOPS, 28.52 MiB/s [2024-11-19T09:48:55.293Z] 7319.95 IOPS, 28.59 MiB/s [2024-11-19T09:48:55.293Z] 7338.57 IOPS, 28.67 MiB/s [2024-11-19T09:48:55.293Z] Received shutdown signal, test time was about 56.075315 seconds 00:21:07.670 00:21:07.670 Latency(us) 00:21:07.670 [2024-11-19T09:48:55.293Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:07.670 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:07.670 Verification LBA range: start 0x0 length 0x4000 00:21:07.670 Nvme0n1 : 56.07 7339.20 28.67 0.00 0.00 17412.38 1102.20 7046430.72 00:21:07.670 [2024-11-19T09:48:55.293Z] =================================================================================================================== 00:21:07.670 [2024-11-19T09:48:55.293Z] Total : 7339.20 28.67 0.00 0.00 17412.38 1102.20 7046430.72 00:21:07.670 09:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:07.670 09:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 00:21:07.670 09:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:21:07.670 09:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@125 -- # nvmftestfini 00:21:07.670 09:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:07.670 09:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@121 -- # sync 00:21:07.670 09:48:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:07.670 09:48:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@124 -- # set +e 00:21:07.670 09:48:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:07.670 09:48:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:07.670 rmmod nvme_tcp 00:21:07.670 rmmod nvme_fabrics 00:21:07.670 rmmod nvme_keyring 00:21:07.670 09:48:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:07.670 09:48:55 
nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@128 -- # set -e 00:21:07.670 09:48:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@129 -- # return 0 00:21:07.670 09:48:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@517 -- # '[' -n 80906 ']' 00:21:07.670 09:48:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@518 -- # killprocess 80906 00:21:07.670 09:48:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # '[' -z 80906 ']' 00:21:07.670 09:48:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # kill -0 80906 00:21:07.670 09:48:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # uname 00:21:07.670 09:48:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:07.670 09:48:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80906 00:21:07.670 09:48:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:07.670 09:48:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:07.670 killing process with pid 80906 00:21:07.670 09:48:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80906' 00:21:07.670 09:48:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@973 -- # kill 80906 00:21:07.670 09:48:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@978 -- # wait 80906 00:21:07.930 09:48:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:07.930 09:48:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:07.930 09:48:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:07.930 09:48:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@297 -- # iptr 00:21:07.930 09:48:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:07.930 09:48:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # iptables-save 00:21:07.930 09:48:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:21:07.930 09:48:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:07.930 09:48:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:21:07.930 09:48:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:21:07.930 09:48:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:21:07.930 09:48:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:21:07.930 09:48:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:21:07.930 09:48:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:21:07.930 09:48:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:21:07.930 09:48:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:21:07.930 09:48:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 
00:21:07.930 09:48:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:21:07.930 09:48:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:21:07.930 09:48:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:21:07.930 09:48:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:07.930 09:48:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:08.190 09:48:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:21:08.190 09:48:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:08.190 09:48:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:08.190 09:48:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:08.190 09:48:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@300 -- # return 0 00:21:08.190 ************************************ 00:21:08.190 END TEST nvmf_host_multipath 00:21:08.190 00:21:08.190 real 1m1.876s 00:21:08.190 user 2m51.828s 00:21:08.190 sys 0m18.441s 00:21:08.190 09:48:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:08.190 09:48:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:21:08.190 ************************************ 00:21:08.190 09:48:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@43 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:21:08.190 09:48:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:08.190 09:48:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:08.190 09:48:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:08.190 ************************************ 00:21:08.190 START TEST nvmf_timeout 00:21:08.190 ************************************ 00:21:08.190 09:48:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:21:08.190 * Looking for test storage... 
00:21:08.190 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:08.190 09:48:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:08.190 09:48:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1693 -- # lcov --version 00:21:08.190 09:48:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:08.450 09:48:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:08.450 09:48:55 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:08.450 09:48:55 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:08.450 09:48:55 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:08.450 09:48:55 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:21:08.450 09:48:55 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:21:08.450 09:48:55 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:21:08.450 09:48:55 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:21:08.450 09:48:55 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:21:08.450 09:48:55 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:21:08.450 09:48:55 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:21:08.450 09:48:55 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:08.450 09:48:55 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@344 -- # case "$op" in 00:21:08.450 09:48:55 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@345 -- # : 1 00:21:08.450 09:48:55 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:08.450 09:48:55 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:08.450 09:48:55 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # decimal 1 00:21:08.450 09:48:55 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=1 00:21:08.450 09:48:55 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:08.451 09:48:55 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 1 00:21:08.451 09:48:55 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:21:08.451 09:48:55 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # decimal 2 00:21:08.451 09:48:55 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=2 00:21:08.451 09:48:55 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:08.451 09:48:55 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 2 00:21:08.451 09:48:55 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:21:08.451 09:48:55 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:08.451 09:48:55 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:08.451 09:48:55 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # return 0 00:21:08.451 09:48:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:08.451 09:48:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:08.451 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:08.451 --rc genhtml_branch_coverage=1 00:21:08.451 --rc genhtml_function_coverage=1 00:21:08.451 --rc genhtml_legend=1 00:21:08.451 --rc geninfo_all_blocks=1 00:21:08.451 --rc geninfo_unexecuted_blocks=1 00:21:08.451 00:21:08.451 ' 00:21:08.451 09:48:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:08.451 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:08.451 --rc genhtml_branch_coverage=1 00:21:08.451 --rc genhtml_function_coverage=1 00:21:08.451 --rc genhtml_legend=1 00:21:08.451 --rc geninfo_all_blocks=1 00:21:08.451 --rc geninfo_unexecuted_blocks=1 00:21:08.451 00:21:08.451 ' 00:21:08.451 09:48:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:08.451 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:08.451 --rc genhtml_branch_coverage=1 00:21:08.451 --rc genhtml_function_coverage=1 00:21:08.451 --rc genhtml_legend=1 00:21:08.451 --rc geninfo_all_blocks=1 00:21:08.451 --rc geninfo_unexecuted_blocks=1 00:21:08.451 00:21:08.451 ' 00:21:08.451 09:48:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:08.451 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:08.451 --rc genhtml_branch_coverage=1 00:21:08.451 --rc genhtml_function_coverage=1 00:21:08.451 --rc genhtml_legend=1 00:21:08.451 --rc geninfo_all_blocks=1 00:21:08.451 --rc geninfo_unexecuted_blocks=1 00:21:08.451 00:21:08.451 ' 00:21:08.451 09:48:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:08.451 09:48:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # uname -s 00:21:08.451 09:48:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:08.451 09:48:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:08.451 
09:48:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:08.451 09:48:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:08.451 09:48:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:08.451 09:48:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:08.451 09:48:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:08.451 09:48:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:08.451 09:48:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:08.451 09:48:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:08.451 09:48:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c 00:21:08.451 09:48:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=9203ba0c-8506-4f0b-a886-a7f874c4694c 00:21:08.451 09:48:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:08.451 09:48:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:08.451 09:48:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:08.451 09:48:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:08.451 09:48:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:08.451 09:48:55 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:21:08.451 09:48:55 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:08.451 09:48:55 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:08.451 09:48:55 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:08.451 09:48:55 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:08.451 09:48:55 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:08.451 09:48:55 nvmf_tcp.nvmf_host.nvmf_timeout -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:08.451 09:48:55 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@5 -- # export PATH 00:21:08.451 09:48:55 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:08.451 09:48:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@51 -- # : 0 00:21:08.451 09:48:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:08.451 09:48:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:08.451 09:48:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:08.451 09:48:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:08.451 09:48:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:08.451 09:48:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:08.451 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:08.451 09:48:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:08.451 09:48:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:08.451 09:48:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:08.451 09:48:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:08.451 09:48:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:08.451 09:48:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:08.451 09:48:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:21:08.451 09:48:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:08.451 09:48:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@19 -- # nvmftestinit 00:21:08.451 09:48:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:08.451 09:48:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:08.451 09:48:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:08.451 09:48:55 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:08.451 09:48:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:08.451 09:48:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:08.451 09:48:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:08.451 09:48:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:08.451 09:48:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:21:08.451 09:48:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:21:08.451 09:48:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:21:08.451 09:48:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:21:08.451 09:48:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:21:08.451 09:48:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@460 -- # nvmf_veth_init 00:21:08.451 09:48:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:08.451 09:48:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:21:08.452 09:48:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:21:08.452 09:48:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:21:08.452 09:48:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:08.452 09:48:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:21:08.452 09:48:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:08.452 09:48:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:21:08.452 09:48:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:08.452 09:48:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:21:08.452 09:48:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:08.452 09:48:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:08.452 09:48:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:08.452 09:48:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:08.452 09:48:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:08.452 09:48:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:08.452 09:48:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:21:08.452 Cannot find device "nvmf_init_br" 00:21:08.452 09:48:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # true 00:21:08.452 09:48:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:21:08.452 Cannot find device "nvmf_init_br2" 00:21:08.452 09:48:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # true 00:21:08.452 09:48:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 
-- # ip link set nvmf_tgt_br nomaster 00:21:08.452 Cannot find device "nvmf_tgt_br" 00:21:08.452 09:48:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 -- # true 00:21:08.452 09:48:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:21:08.452 Cannot find device "nvmf_tgt_br2" 00:21:08.452 09:48:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # true 00:21:08.452 09:48:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:21:08.452 Cannot find device "nvmf_init_br" 00:21:08.452 09:48:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # true 00:21:08.452 09:48:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:21:08.452 Cannot find device "nvmf_init_br2" 00:21:08.452 09:48:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # true 00:21:08.452 09:48:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:21:08.452 Cannot find device "nvmf_tgt_br" 00:21:08.452 09:48:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # true 00:21:08.452 09:48:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:21:08.452 Cannot find device "nvmf_tgt_br2" 00:21:08.452 09:48:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # true 00:21:08.452 09:48:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:21:08.452 Cannot find device "nvmf_br" 00:21:08.452 09:48:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # true 00:21:08.452 09:48:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:21:08.452 Cannot find device "nvmf_init_if" 00:21:08.452 09:48:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # true 00:21:08.452 09:48:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:21:08.452 Cannot find device "nvmf_init_if2" 00:21:08.452 09:48:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # true 00:21:08.452 09:48:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:08.452 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:08.452 09:48:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # true 00:21:08.452 09:48:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:08.452 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:08.452 09:48:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # true 00:21:08.452 09:48:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:21:08.452 09:48:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:08.452 09:48:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:21:08.452 09:48:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:08.452 09:48:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:08.452 09:48:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 
00:21:08.711 09:48:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:08.711 09:48:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:08.711 09:48:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:21:08.711 09:48:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:21:08.711 09:48:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:21:08.711 09:48:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:21:08.711 09:48:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:21:08.711 09:48:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:21:08.711 09:48:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:21:08.711 09:48:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:21:08.711 09:48:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:21:08.711 09:48:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:08.711 09:48:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:08.711 09:48:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:08.711 09:48:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:21:08.711 09:48:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:21:08.711 09:48:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:21:08.711 09:48:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:21:08.711 09:48:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:08.711 09:48:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:08.711 09:48:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:08.711 09:48:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:21:08.711 09:48:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:21:08.711 09:48:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:21:08.711 09:48:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:08.711 09:48:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 
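The nvmf_veth_init sequence above builds the test network from scratch: an nvmf_tgt_ns_spdk namespace for the target, veth pairs for two initiator addresses (10.0.0.1, 10.0.0.2) and two target addresses (10.0.0.3, 10.0.0.4 inside the namespace), everything enslaved to the nvmf_br bridge, plus iptables ACCEPT rules for TCP port 4420 and for forwarding across the bridge. Collapsed into one place (the SPDK_NVMF comment tags on the iptables rules are omitted for brevity), the same setup looks like this:

  # Consolidated sketch of the topology created by nvmf_veth_init above.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if  type veth peer name nvmf_init_br
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
  ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip addr add 10.0.0.2/24 dev nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
  for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" up
  done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" master nvmf_br
  done
  iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
  iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The four pings that follow (10.0.0.3 and 10.0.0.4 from the host, 10.0.0.1 and 10.0.0.2 from inside the namespace) are only connectivity checks across this bridge before the target is started.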
00:21:08.711 09:48:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:21:08.711 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:08.711 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.074 ms 00:21:08.711 00:21:08.711 --- 10.0.0.3 ping statistics --- 00:21:08.711 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:08.711 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:21:08.711 09:48:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:21:08.711 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:21:08.711 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.047 ms 00:21:08.711 00:21:08.711 --- 10.0.0.4 ping statistics --- 00:21:08.711 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:08.711 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:21:08.711 09:48:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:08.711 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:08.711 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:21:08.711 00:21:08.711 --- 10.0.0.1 ping statistics --- 00:21:08.711 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:08.711 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:21:08.711 09:48:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:21:08.711 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:08.711 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.076 ms 00:21:08.711 00:21:08.711 --- 10.0.0.2 ping statistics --- 00:21:08.711 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:08.711 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:21:08.711 09:48:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:08.711 09:48:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@461 -- # return 0 00:21:08.711 09:48:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:08.711 09:48:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:08.711 09:48:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:08.711 09:48:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:08.711 09:48:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:08.711 09:48:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:08.711 09:48:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:08.711 09:48:56 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:21:08.711 09:48:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:08.711 09:48:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:08.711 09:48:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:08.711 09:48:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@509 -- # nvmfpid=82132 00:21:08.711 09:48:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:21:08.711 09:48:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@510 -- # waitforlisten 82132 00:21:08.711 09:48:56 
nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 82132 ']' 00:21:08.711 09:48:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:08.711 09:48:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:08.711 09:48:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:08.711 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:08.711 09:48:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:08.711 09:48:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:08.968 [2024-11-19 09:48:56.345183] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:21:08.968 [2024-11-19 09:48:56.345297] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:08.968 [2024-11-19 09:48:56.499794] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:08.968 [2024-11-19 09:48:56.564102] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:08.968 [2024-11-19 09:48:56.564170] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:08.968 [2024-11-19 09:48:56.564184] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:08.968 [2024-11-19 09:48:56.564195] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:08.968 [2024-11-19 09:48:56.564204] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:08.968 [2024-11-19 09:48:56.565466] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:08.968 [2024-11-19 09:48:56.565477] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:09.226 [2024-11-19 09:48:56.626459] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:09.226 09:48:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:09.227 09:48:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:21:09.227 09:48:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:09.227 09:48:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:09.227 09:48:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:09.227 09:48:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:09.227 09:48:56 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:09.227 09:48:56 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:09.485 [2024-11-19 09:48:57.031300] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:09.485 09:48:57 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:21:10.053 Malloc0 00:21:10.053 09:48:57 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:10.312 09:48:57 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:10.571 09:48:57 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:21:10.571 [2024-11-19 09:48:58.192185] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:10.831 09:48:58 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@32 -- # bdevperf_pid=82178 00:21:10.831 09:48:58 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:21:10.831 09:48:58 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@34 -- # waitforlisten 82178 /var/tmp/bdevperf.sock 00:21:10.831 09:48:58 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 82178 ']' 00:21:10.831 09:48:58 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:10.831 09:48:58 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:10.831 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:10.831 09:48:58 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
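By this point the target has been started inside the namespace (ip netns exec nvmf_tgt_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3, pid 82132) and timeout.sh has provisioned it over the default /var/tmp/spdk.sock: a TCP transport, a 64 MiB malloc bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1 containing that bdev, and a listener on 10.0.0.3:4420; bdevperf has just been launched and the script is waiting for its RPC socket. The target-side RPCs, collapsed from the trace above:

  # Target-side provisioning issued by host/timeout.sh (all RPCs appear verbatim above).
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420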
00:21:10.831 09:48:58 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:10.831 09:48:58 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:10.831 [2024-11-19 09:48:58.262373] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:21:10.831 [2024-11-19 09:48:58.262460] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82178 ] 00:21:10.831 [2024-11-19 09:48:58.407581] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:11.089 [2024-11-19 09:48:58.471873] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:11.089 [2024-11-19 09:48:58.525837] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:11.089 09:48:58 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:11.089 09:48:58 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:21:11.089 09:48:58 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:21:11.348 09:48:58 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:21:11.915 NVMe0n1 00:21:11.915 09:48:59 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@51 -- # rpc_pid=82194 00:21:11.915 09:48:59 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:11.915 09:48:59 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@53 -- # sleep 1 00:21:11.915 Running I/O for 10 seconds... 
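On the host side, bdevperf runs on core mask 0x4 with a queue depth of 128 and a 4096-byte verify workload for 10 seconds, and is controlled over /var/tmp/bdevperf.sock: bdev_nvme_set_options -r -1 is applied first, then bdev_nvme_attach_controller connects to the listener with --ctrlr-loss-timeout-sec 5 and --reconnect-delay-sec 2 (these two knobs bound how long and how often reconnects are attempted once the path drops), which exposes the NVMe0n1 bdev, and finally bdevperf.py perform_tests (rpc_pid 82194) starts the I/O. Condensed from the trace above:

  # Host-side sequence from host/timeout.sh (commands and flags as printed in the log).
  spdk=/home/vagrant/spdk_repo/spdk
  $spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
      -q 128 -o 4096 -w verify -t 10 -f &
  # ... the script waits for /var/tmp/bdevperf.sock before issuing RPCs ...
  $spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
  $spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
  $spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &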
00:21:12.854 09:49:00 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:21:13.135 7164.00 IOPS, 27.98 MiB/s [2024-11-19T09:49:00.758Z] [2024-11-19 09:49:00.562977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:69344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:13.135 [2024-11-19 09:49:00.563038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.135 [2024-11-19 09:49:00.563063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:69352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:13.135 [2024-11-19 09:49:00.563075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.135 [2024-11-19 09:49:00.563088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:69360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:13.135 [2024-11-19 09:49:00.563098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.135 [2024-11-19 09:49:00.563110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:69368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:13.135 [2024-11-19 09:49:00.563120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.135 [2024-11-19 09:49:00.563132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:69376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:13.135 [2024-11-19 09:49:00.563143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.135 [2024-11-19 09:49:00.563154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:69384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:13.135 [2024-11-19 09:49:00.563163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.135 [2024-11-19 09:49:00.563175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:69392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:13.136 [2024-11-19 09:49:00.563184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.136 [2024-11-19 09:49:00.563196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:69400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:13.136 [2024-11-19 09:49:00.563217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.136 [2024-11-19 09:49:00.563242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:68832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.136 [2024-11-19 09:49:00.563266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.136 [2024-11-19 09:49:00.563278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:68840 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.136 [2024-11-19 09:49:00.563288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.136 [2024-11-19 09:49:00.563299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:68848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.136 [2024-11-19 09:49:00.563309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.136 [2024-11-19 09:49:00.563320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:68856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.136 [2024-11-19 09:49:00.563332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.136 [2024-11-19 09:49:00.563343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:68864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.136 [2024-11-19 09:49:00.563353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.136 [2024-11-19 09:49:00.563365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:68872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.136 [2024-11-19 09:49:00.563375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.136 [2024-11-19 09:49:00.563386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:68880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.136 [2024-11-19 09:49:00.563396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.136 [2024-11-19 09:49:00.563408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:68888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.136 [2024-11-19 09:49:00.563418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.136 [2024-11-19 09:49:00.563429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:68896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.136 [2024-11-19 09:49:00.563439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.136 [2024-11-19 09:49:00.563451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:68904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.136 [2024-11-19 09:49:00.563461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.136 [2024-11-19 09:49:00.563473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:68912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.136 [2024-11-19 09:49:00.563482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.136 [2024-11-19 09:49:00.563494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:68920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:21:13.136 [2024-11-19 09:49:00.563504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.136 [2024-11-19 09:49:00.563522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:68928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.136 [2024-11-19 09:49:00.563532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.136 [2024-11-19 09:49:00.563543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:68936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.136 [2024-11-19 09:49:00.563553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.136 [2024-11-19 09:49:00.563576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:68944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.136 [2024-11-19 09:49:00.563586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.136 [2024-11-19 09:49:00.563597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:68952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.136 [2024-11-19 09:49:00.563607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.136 [2024-11-19 09:49:00.563618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:69408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:13.136 [2024-11-19 09:49:00.563628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.136 [2024-11-19 09:49:00.563639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:69416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:13.136 [2024-11-19 09:49:00.563648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.136 [2024-11-19 09:49:00.563659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:69424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:13.136 [2024-11-19 09:49:00.563669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.136 [2024-11-19 09:49:00.563681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:69432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:13.136 [2024-11-19 09:49:00.563690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.136 [2024-11-19 09:49:00.563702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:69440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:13.136 [2024-11-19 09:49:00.563712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.136 [2024-11-19 09:49:00.563724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:69448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:13.136 [2024-11-19 09:49:00.563733] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.136 [2024-11-19 09:49:00.563745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:69456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:13.136 [2024-11-19 09:49:00.563754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.136 [2024-11-19 09:49:00.563766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:13.136 [2024-11-19 09:49:00.563775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.136 [2024-11-19 09:49:00.563787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:68960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.136 [2024-11-19 09:49:00.563801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.136 [2024-11-19 09:49:00.563821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:68968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.136 [2024-11-19 09:49:00.563840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.136 [2024-11-19 09:49:00.563859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:68976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.136 [2024-11-19 09:49:00.563877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.136 [2024-11-19 09:49:00.563895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:68984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.136 [2024-11-19 09:49:00.563911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.136 [2024-11-19 09:49:00.563930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:68992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.136 [2024-11-19 09:49:00.563948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.136 [2024-11-19 09:49:00.563972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:69000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.136 [2024-11-19 09:49:00.563987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.136 [2024-11-19 09:49:00.563999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:69008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.136 [2024-11-19 09:49:00.564009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.136 [2024-11-19 09:49:00.564029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:69016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.136 [2024-11-19 09:49:00.564039] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.136 [2024-11-19 09:49:00.564050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:69024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.137 [2024-11-19 09:49:00.564060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.137 [2024-11-19 09:49:00.564071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:69032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.137 [2024-11-19 09:49:00.564081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.137 [2024-11-19 09:49:00.564092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:69040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.137 [2024-11-19 09:49:00.564101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.137 [2024-11-19 09:49:00.564122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:69048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.137 [2024-11-19 09:49:00.564132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.137 [2024-11-19 09:49:00.564144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:69056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.137 [2024-11-19 09:49:00.564154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.137 [2024-11-19 09:49:00.564166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:69064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.137 [2024-11-19 09:49:00.564175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.137 [2024-11-19 09:49:00.564187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:69072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.137 [2024-11-19 09:49:00.564196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.137 [2024-11-19 09:49:00.564219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:69080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.137 [2024-11-19 09:49:00.564231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.137 [2024-11-19 09:49:00.564243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:69088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.137 [2024-11-19 09:49:00.564252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.137 [2024-11-19 09:49:00.564264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:69096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.137 [2024-11-19 09:49:00.564274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.137 [2024-11-19 09:49:00.564286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:69104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.137 [2024-11-19 09:49:00.564295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.137 [2024-11-19 09:49:00.564307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:69112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.137 [2024-11-19 09:49:00.564317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.137 [2024-11-19 09:49:00.564328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:69120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.137 [2024-11-19 09:49:00.564345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.137 [2024-11-19 09:49:00.564356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:69128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.137 [2024-11-19 09:49:00.564376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.137 [2024-11-19 09:49:00.564387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:69136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.137 [2024-11-19 09:49:00.564397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.137 [2024-11-19 09:49:00.564408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:69144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.137 [2024-11-19 09:49:00.564418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.137 [2024-11-19 09:49:00.564429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:69472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:13.137 [2024-11-19 09:49:00.564439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.137 [2024-11-19 09:49:00.564450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:69480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:13.137 [2024-11-19 09:49:00.564459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.137 [2024-11-19 09:49:00.564471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:69488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:13.137 [2024-11-19 09:49:00.564480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.137 [2024-11-19 09:49:00.564492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:69496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:13.137 [2024-11-19 09:49:00.564501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:13.137 [2024-11-19 09:49:00.564517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:69504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:13.137 [2024-11-19 09:49:00.564527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.137 [2024-11-19 09:49:00.564538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:69512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:13.137 [2024-11-19 09:49:00.564547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.137 [2024-11-19 09:49:00.564560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:69520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:13.137 [2024-11-19 09:49:00.564570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.137 [2024-11-19 09:49:00.564582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:69528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:13.137 [2024-11-19 09:49:00.564591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.137 [2024-11-19 09:49:00.564603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:69536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:13.137 [2024-11-19 09:49:00.564613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.137 [2024-11-19 09:49:00.564624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:69544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:13.137 [2024-11-19 09:49:00.564634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.137 [2024-11-19 09:49:00.564645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:69552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:13.137 [2024-11-19 09:49:00.564655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.137 [2024-11-19 09:49:00.564666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:69560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:13.137 [2024-11-19 09:49:00.564675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.137 [2024-11-19 09:49:00.564687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:69568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:13.137 [2024-11-19 09:49:00.564697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.137 [2024-11-19 09:49:00.564708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:69576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:13.137 [2024-11-19 09:49:00.564717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.137 [2024-11-19 09:49:00.564729] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:69584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:13.137 [2024-11-19 09:49:00.564738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.137 [2024-11-19 09:49:00.564750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:69592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:13.137 [2024-11-19 09:49:00.564760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.137 [2024-11-19 09:49:00.564771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:69152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.137 [2024-11-19 09:49:00.564781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.137 [2024-11-19 09:49:00.564792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:69160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.137 [2024-11-19 09:49:00.564802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.137 [2024-11-19 09:49:00.564813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:69168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.137 [2024-11-19 09:49:00.564823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.138 [2024-11-19 09:49:00.564834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:69176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.138 [2024-11-19 09:49:00.564844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.138 [2024-11-19 09:49:00.564865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:69184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.138 [2024-11-19 09:49:00.564875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.138 [2024-11-19 09:49:00.564886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:69192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.138 [2024-11-19 09:49:00.564896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.138 [2024-11-19 09:49:00.564908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:69200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.138 [2024-11-19 09:49:00.564917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.138 [2024-11-19 09:49:00.564929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:69208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.138 [2024-11-19 09:49:00.564938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.138 [2024-11-19 09:49:00.564949] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:69216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.138 [2024-11-19 09:49:00.564959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.138 [2024-11-19 09:49:00.564970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:69224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.138 [2024-11-19 09:49:00.564980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.138 [2024-11-19 09:49:00.564991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:69232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.138 [2024-11-19 09:49:00.565002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.138 [2024-11-19 09:49:00.565014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:69240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.138 [2024-11-19 09:49:00.565023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.138 [2024-11-19 09:49:00.565035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:69248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.138 [2024-11-19 09:49:00.565044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.138 [2024-11-19 09:49:00.565057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:69256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.138 [2024-11-19 09:49:00.565067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.138 [2024-11-19 09:49:00.565078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:69264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.138 [2024-11-19 09:49:00.565087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.138 [2024-11-19 09:49:00.565099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:69272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.138 [2024-11-19 09:49:00.565108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.138 [2024-11-19 09:49:00.565120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:69600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:13.138 [2024-11-19 09:49:00.565129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.138 [2024-11-19 09:49:00.565141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:69608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:13.138 [2024-11-19 09:49:00.565150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.138 [2024-11-19 09:49:00.565162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:60 nsid:1 lba:69616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:13.138 [2024-11-19 09:49:00.565171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.138 [2024-11-19 09:49:00.565182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:69624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:13.138 [2024-11-19 09:49:00.565192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.138 [2024-11-19 09:49:00.565218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:69632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:13.138 [2024-11-19 09:49:00.565229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.138 [2024-11-19 09:49:00.565241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:69640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:13.138 [2024-11-19 09:49:00.565252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.138 [2024-11-19 09:49:00.565263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:69648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:13.138 [2024-11-19 09:49:00.565273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.138 [2024-11-19 09:49:00.565284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:69656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:13.138 [2024-11-19 09:49:00.565293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.138 [2024-11-19 09:49:00.565305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:69280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.138 [2024-11-19 09:49:00.565314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.138 [2024-11-19 09:49:00.565325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:69288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.138 [2024-11-19 09:49:00.565335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.138 [2024-11-19 09:49:00.565346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:69296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.138 [2024-11-19 09:49:00.565356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.138 [2024-11-19 09:49:00.565368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:69304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.138 [2024-11-19 09:49:00.565378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.138 [2024-11-19 09:49:00.565389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:69312 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.138 [2024-11-19 09:49:00.565399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.138 [2024-11-19 09:49:00.565411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:69320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.138 [2024-11-19 09:49:00.565421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.138 [2024-11-19 09:49:00.565432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:69328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.138 [2024-11-19 09:49:00.565441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.138 [2024-11-19 09:49:00.565452] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8a1d0 is same with the state(6) to be set 00:21:13.138 [2024-11-19 09:49:00.565464] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:13.138 [2024-11-19 09:49:00.565472] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:13.138 [2024-11-19 09:49:00.565481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:69336 len:8 PRP1 0x0 PRP2 0x0 00:21:13.138 [2024-11-19 09:49:00.565491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.138 [2024-11-19 09:49:00.565502] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:13.138 [2024-11-19 09:49:00.565509] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:13.138 [2024-11-19 09:49:00.565517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69664 len:8 PRP1 0x0 PRP2 0x0 00:21:13.138 [2024-11-19 09:49:00.565527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.138 [2024-11-19 09:49:00.565537] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:13.138 [2024-11-19 09:49:00.565549] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:13.138 [2024-11-19 09:49:00.565558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69672 len:8 PRP1 0x0 PRP2 0x0 00:21:13.138 [2024-11-19 09:49:00.565567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.138 [2024-11-19 09:49:00.565577] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:13.138 [2024-11-19 09:49:00.565584] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:13.138 [2024-11-19 09:49:00.565592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69680 len:8 PRP1 0x0 PRP2 0x0 00:21:13.138 [2024-11-19 09:49:00.565602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.138 [2024-11-19 09:49:00.565611] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:13.138 [2024-11-19 
09:49:00.565619] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:13.139 [2024-11-19 09:49:00.565627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69688 len:8 PRP1 0x0 PRP2 0x0 00:21:13.139 [2024-11-19 09:49:00.565636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.139 [2024-11-19 09:49:00.565646] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:13.139 [2024-11-19 09:49:00.565654] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:13.139 [2024-11-19 09:49:00.565662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69696 len:8 PRP1 0x0 PRP2 0x0 00:21:13.139 [2024-11-19 09:49:00.565672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.139 [2024-11-19 09:49:00.565681] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:13.139 [2024-11-19 09:49:00.565689] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:13.139 [2024-11-19 09:49:00.565697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69704 len:8 PRP1 0x0 PRP2 0x0 00:21:13.139 [2024-11-19 09:49:00.565706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.139 [2024-11-19 09:49:00.565716] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:13.139 [2024-11-19 09:49:00.565723] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:13.139 [2024-11-19 09:49:00.565731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69712 len:8 PRP1 0x0 PRP2 0x0 00:21:13.139 [2024-11-19 09:49:00.565740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.139 [2024-11-19 09:49:00.565750] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:13.139 [2024-11-19 09:49:00.565757] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:13.139 [2024-11-19 09:49:00.565765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69720 len:8 PRP1 0x0 PRP2 0x0 00:21:13.139 [2024-11-19 09:49:00.565774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.139 [2024-11-19 09:49:00.565784] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:13.139 [2024-11-19 09:49:00.565791] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:13.139 [2024-11-19 09:49:00.565799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69728 len:8 PRP1 0x0 PRP2 0x0 00:21:13.139 [2024-11-19 09:49:00.565808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.139 [2024-11-19 09:49:00.565818] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:13.139 [2024-11-19 09:49:00.565830] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:13.139 [2024-11-19 09:49:00.565839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69736 len:8 PRP1 0x0 PRP2 0x0 00:21:13.139 [2024-11-19 09:49:00.565848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.139 [2024-11-19 09:49:00.565858] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:13.139 [2024-11-19 09:49:00.565865] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:13.139 [2024-11-19 09:49:00.565873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69744 len:8 PRP1 0x0 PRP2 0x0 00:21:13.139 [2024-11-19 09:49:00.565882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.139 [2024-11-19 09:49:00.565892] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:13.139 [2024-11-19 09:49:00.565899] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:13.139 [2024-11-19 09:49:00.565907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69752 len:8 PRP1 0x0 PRP2 0x0 00:21:13.139 [2024-11-19 09:49:00.565917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.139 [2024-11-19 09:49:00.565926] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:13.139 [2024-11-19 09:49:00.565934] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:13.139 [2024-11-19 09:49:00.565941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69760 len:8 PRP1 0x0 PRP2 0x0 00:21:13.139 [2024-11-19 09:49:00.565951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.139 [2024-11-19 09:49:00.565960] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:13.139 [2024-11-19 09:49:00.565967] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:13.139 [2024-11-19 09:49:00.565975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69768 len:8 PRP1 0x0 PRP2 0x0 00:21:13.139 [2024-11-19 09:49:00.565985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.139 [2024-11-19 09:49:00.565994] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:13.139 [2024-11-19 09:49:00.566001] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:13.139 [2024-11-19 09:49:00.566009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69776 len:8 PRP1 0x0 PRP2 0x0 00:21:13.139 [2024-11-19 09:49:00.566018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.139 [2024-11-19 09:49:00.566028] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:13.139 [2024-11-19 09:49:00.566035] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:21:13.139 [2024-11-19 09:49:00.566043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69784 len:8 PRP1 0x0 PRP2 0x0 00:21:13.139 [2024-11-19 09:49:00.566054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.139 [2024-11-19 09:49:00.566063] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:13.139 [2024-11-19 09:49:00.566071] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:13.139 [2024-11-19 09:49:00.566079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69792 len:8 PRP1 0x0 PRP2 0x0 00:21:13.139 [2024-11-19 09:49:00.566088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.139 [2024-11-19 09:49:00.566098] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:13.139 [2024-11-19 09:49:00.566110] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:13.139 [2024-11-19 09:49:00.566118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69800 len:8 PRP1 0x0 PRP2 0x0 00:21:13.139 [2024-11-19 09:49:00.566127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.139 [2024-11-19 09:49:00.566137] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:13.139 [2024-11-19 09:49:00.566144] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:13.139 [2024-11-19 09:49:00.566152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69808 len:8 PRP1 0x0 PRP2 0x0 00:21:13.139 [2024-11-19 09:49:00.566161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.139 [2024-11-19 09:49:00.566171] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:13.139 [2024-11-19 09:49:00.566178] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:13.139 [2024-11-19 09:49:00.566186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69816 len:8 PRP1 0x0 PRP2 0x0 00:21:13.139 [2024-11-19 09:49:00.566196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.139 [2024-11-19 09:49:00.566215] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:13.139 [2024-11-19 09:49:00.566224] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:13.139 [2024-11-19 09:49:00.566232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69824 len:8 PRP1 0x0 PRP2 0x0 00:21:13.139 [2024-11-19 09:49:00.566241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.139 [2024-11-19 09:49:00.566252] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:13.140 [2024-11-19 09:49:00.566259] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:13.140 [2024-11-19 
09:49:00.566267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69832 len:8 PRP1 0x0 PRP2 0x0 00:21:13.140 [2024-11-19 09:49:00.566276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.140 [2024-11-19 09:49:00.566286] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:13.140 [2024-11-19 09:49:00.566293] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:13.140 [2024-11-19 09:49:00.566301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69840 len:8 PRP1 0x0 PRP2 0x0 00:21:13.140 [2024-11-19 09:49:00.566311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.140 [2024-11-19 09:49:00.566320] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:13.140 [2024-11-19 09:49:00.566327] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:13.140 [2024-11-19 09:49:00.566335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69848 len:8 PRP1 0x0 PRP2 0x0 00:21:13.140 [2024-11-19 09:49:00.566353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.140 [2024-11-19 09:49:00.566493] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:13.140 [2024-11-19 09:49:00.566521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.140 [2024-11-19 09:49:00.566533] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:13.140 [2024-11-19 09:49:00.566543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.140 [2024-11-19 09:49:00.579444] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:13.140 [2024-11-19 09:49:00.579490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.140 [2024-11-19 09:49:00.579507] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:13.140 [2024-11-19 09:49:00.579520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.140 [2024-11-19 09:49:00.579534] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ce50 is same with the state(6) to be set 00:21:13.140 [2024-11-19 09:49:00.579894] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:21:13.140 [2024-11-19 09:49:00.579955] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ce50 (9): Bad file descriptor 00:21:13.140 [2024-11-19 09:49:00.580122] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:21:13.140 [2024-11-19 09:49:00.580152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock 
connection error of tqpair=0x1b1ce50 with addr=10.0.0.3, port=4420 00:21:13.140 [2024-11-19 09:49:00.580167] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ce50 is same with the state(6) to be set 00:21:13.140 [2024-11-19 09:49:00.580189] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ce50 (9): Bad file descriptor 00:21:13.140 [2024-11-19 09:49:00.580233] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:21:13.140 [2024-11-19 09:49:00.580249] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:21:13.140 [2024-11-19 09:49:00.580263] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:21:13.140 [2024-11-19 09:49:00.580276] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:21:13.140 [2024-11-19 09:49:00.580289] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:21:13.140 09:49:00 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@56 -- # sleep 2 00:21:15.014 4302.00 IOPS, 16.80 MiB/s [2024-11-19T09:49:02.637Z] 2868.00 IOPS, 11.20 MiB/s [2024-11-19T09:49:02.637Z] [2024-11-19 09:49:02.580418] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:21:15.014 [2024-11-19 09:49:02.580490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ce50 with addr=10.0.0.3, port=4420 00:21:15.014 [2024-11-19 09:49:02.580524] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ce50 is same with the state(6) to be set 00:21:15.014 [2024-11-19 09:49:02.580550] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ce50 (9): Bad file descriptor 00:21:15.014 [2024-11-19 09:49:02.580569] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:21:15.014 [2024-11-19 09:49:02.580579] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:21:15.014 [2024-11-19 09:49:02.580590] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:21:15.014 [2024-11-19 09:49:02.580601] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 
00:21:15.014 [2024-11-19 09:49:02.580612] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:21:15.014 09:49:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # get_controller
00:21:15.014 09:49:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:21:15.014 09:49:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name'
00:21:15.273 09:49:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]]
00:21:15.273 09:49:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # get_bdev
00:21:15.273 09:49:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name'
00:21:15.273 09:49:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs
00:21:15.840 09:49:03 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]]
00:21:15.840 09:49:03 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@61 -- # sleep 5
00:21:16.776 2151.00 IOPS, 8.40 MiB/s [2024-11-19T09:49:04.658Z] 1720.80 IOPS, 6.72 MiB/s [2024-11-19T09:49:04.658Z] [2024-11-19 09:49:04.580874] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:21:17.035 [2024-11-19 09:49:04.580944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ce50 with addr=10.0.0.3, port=4420
00:21:17.035 [2024-11-19 09:49:04.580961] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ce50 is same with the state(6) to be set
00:21:17.035 [2024-11-19 09:49:04.580989] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ce50 (9): Bad file descriptor
00:21:17.035 [2024-11-19 09:49:04.581008] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:21:17.035 [2024-11-19 09:49:04.581019] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:21:17.035 [2024-11-19 09:49:04.581029] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:21:17.035 [2024-11-19 09:49:04.581041] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
00:21:17.035 [2024-11-19 09:49:04.581053] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:21:18.908 1434.00 IOPS, 5.60 MiB/s [2024-11-19T09:49:06.790Z] 1229.14 IOPS, 4.80 MiB/s [2024-11-19T09:49:06.790Z] [2024-11-19 09:49:06.581201] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:21:19.167 [2024-11-19 09:49:06.581266] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:21:19.167 [2024-11-19 09:49:06.581294] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:21:19.167 [2024-11-19 09:49:06.581304] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] already in failed state
00:21:19.167 [2024-11-19 09:49:06.581316] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
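For readability of the checks traced above: get_controller and get_bdev are thin wrappers around rpc.py plus jq against the bdevperf RPC socket. A minimal sketch of what those helpers in host/timeout.sh presumably look like, reconstructed from the traced expansions (the function form and the $rpc_py / $bdevperf_rpc_sock variable names are assumptions, not copied from the script):

    get_controller() {
        # list the NVMe controllers attached inside the bdevperf app and print their names
        "$rpc_py" -s "$bdevperf_rpc_sock" bdev_nvme_get_controllers | jq -r '.[].name'
    }

    get_bdev() {
        # list the bdevs exposed to bdevperf and print their names
        "$rpc_py" -s "$bdevperf_rpc_sock" bdev_get_bdevs | jq -r '.[].name'
    }

The [[ NVMe0 == \N\V\M\e\0 ]] and [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] checks above assert that, while the target is unreachable and resets keep failing, the controller and its namespace bdev are still present; further down the same checks compare empty strings once the controller has finally been dropped.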
00:21:20.103 1075.50 IOPS, 4.20 MiB/s
00:21:20.103 Latency(us)
00:21:20.103 [2024-11-19T09:49:07.726Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:20.103 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:21:20.103 Verification LBA range: start 0x0 length 0x4000
00:21:20.103 NVMe0n1 : 8.19 1050.12 4.10 15.62 0.00 120166.66 3455.53 7046430.72
00:21:20.103 [2024-11-19T09:49:07.726Z] ===================================================================================================================
00:21:20.103 [2024-11-19T09:49:07.726Z] Total : 1050.12 4.10 15.62 0.00 120166.66 3455.53 7046430.72
00:21:20.103 {
00:21:20.103 "results": [
00:21:20.103 {
00:21:20.103 "job": "NVMe0n1",
00:21:20.103 "core_mask": "0x4",
00:21:20.103 "workload": "verify",
00:21:20.103 "status": "finished",
00:21:20.103 "verify_range": {
00:21:20.103 "start": 0,
00:21:20.103 "length": 16384
00:21:20.103 },
00:21:20.103 "queue_depth": 128,
00:21:20.103 "io_size": 4096,
00:21:20.103 "runtime": 8.193334,
00:21:20.103 "iops": 1050.121965002281,
00:21:20.103 "mibps": 4.10203892579016,
00:21:20.103 "io_failed": 128,
00:21:20.103 "io_timeout": 0,
00:21:20.103 "avg_latency_us": 120166.6579935868,
00:21:20.103 "min_latency_us": 3455.5345454545454,
00:21:20.103 "max_latency_us": 7046430.72
00:21:20.103 }
00:21:20.103 ],
00:21:20.103 "core_count": 1
00:21:20.103 }
00:21:20.690 09:49:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # get_controller
00:21:20.691 09:49:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name'
00:21:20.691 09:49:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:21:20.954 09:49:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # [[ '' == '' ]]
00:21:20.954 09:49:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # get_bdev
00:21:20.954 09:49:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name'
00:21:20.954 09:49:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs
00:21:21.213 09:49:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # [[ '' == '' ]]
00:21:21.213 09:49:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@65 -- # wait 82194
00:21:21.213 09:49:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@67 -- # killprocess 82178
00:21:21.213 09:49:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 82178 ']'
00:21:21.213 09:49:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 82178
00:21:21.213 09:49:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname
00:21:21.213 09:49:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:21:21.213 09:49:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82178
00:21:21.213 09:49:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:21:21.213 09:49:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' killing process with pid 82178
00:21:21.213 09:49:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82178'
00:21:21.213 09:49:08 nvmf_tcp.nvmf_host.nvmf_timeout --
common/autotest_common.sh@973 -- # kill 82178
Received shutdown signal, test time was about 9.435106 seconds
00:21:21.213
00:21:21.213 Latency(us)
00:21:21.213 [2024-11-19T09:49:08.836Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:21.213 [2024-11-19T09:49:08.836Z] ===================================================================================================================
00:21:21.213 [2024-11-19T09:49:08.836Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:21:21.213 09:49:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 82178
00:21:21.471 09:49:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 [2024-11-19 09:49:09.248967] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:21:21.730 09:49:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@74 -- # bdevperf_pid=82317
00:21:21.730 09:49:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f
00:21:21.730 09:49:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@76 -- # waitforlisten 82317 /var/tmp/bdevperf.sock
00:21:21.730 09:49:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 82317 ']'
00:21:21.730 09:49:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:21:21.730 09:49:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100
00:21:21.730 09:49:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:21:21.730 09:49:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable
00:21:21.730 09:49:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x
00:21:21.988 [2024-11-19 09:49:09.322344] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization...
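The EAL/bdevperf startup that follows belongs to the second phase of the timeout test: the listener has just been re-added, a fresh bdevperf instance (pid 82317) is started, and the traced rpc.py calls below re-attach the controller with explicit recovery knobs before perform_tests kicks off the verify workload. Condensed into a sketch (rpc_py here is shorthand for scripts/rpc.py -s /var/tmp/bdevperf.sock, an assumption mirroring the traced commands below, not the literal script text):

    # global bdev_nvme options; the exact option behind the short -r flag is whatever
    # scripts/rpc.py bdev_nvme_set_options maps it to in this SPDK revision (left as traced)
    rpc_py bdev_nvme_set_options -r -1

    # attach the target with per-controller recovery limits: drop the controller after
    # roughly 5 s of loss, fail pending I/O after about 2 s, retry the connection every 1 s
    rpc_py bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 \
        --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1

    # start the workload defined on the bdevperf command line above (-q 128 -o 4096 -w verify -t 10)
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests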
00:21:21.730 [2024-11-19 09:49:09.322449] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82317 ]
00:21:21.988 [2024-11-19 09:49:09.471489] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:21:21.988 [2024-11-19 09:49:09.534990] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:21:21.988 [2024-11-19 09:49:09.590455] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring
00:21:22.923 09:49:10 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:21:22.923 09:49:10 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0
00:21:22.923 09:49:10 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
00:21:23.182 09:49:10 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1
00:21:23.440 NVMe0n1
00:21:23.440 09:49:11 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@84 -- # rpc_pid=82341
00:21:23.440 09:49:11 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:21:23.440 09:49:11 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@86 -- # sleep 1
00:21:23.699 Running I/O for 10 seconds...
00:21:24.635 09:49:12 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:21:24.897 8017.00 IOPS, 31.32 MiB/s [2024-11-19T09:49:12.520Z] [2024-11-19 09:49:12.320637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:76680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.897 [2024-11-19 09:49:12.320681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.897 [2024-11-19 09:49:12.320703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:76688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.897 [2024-11-19 09:49:12.320714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.897 [2024-11-19 09:49:12.320725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:76696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.897 [2024-11-19 09:49:12.320735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.897 [2024-11-19 09:49:12.320746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:76704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.897 [2024-11-19 09:49:12.320755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.897 [2024-11-19 09:49:12.320766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:76712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.897 [2024-11-19 09:49:12.320775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.897 [2024-11-19 09:49:12.320786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:76720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.898 [2024-11-19 09:49:12.320795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.898 [2024-11-19 09:49:12.320806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:76728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.898 [2024-11-19 09:49:12.320815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.898 [2024-11-19 09:49:12.320826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:76736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.898 [2024-11-19 09:49:12.320835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.898 [2024-11-19 09:49:12.320846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:76744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.898 [2024-11-19 09:49:12.320872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.898 [2024-11-19 09:49:12.320883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:76752 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.898 [2024-11-19 09:49:12.320892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.898 [2024-11-19 09:49:12.320904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:76760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.898 [2024-11-19 09:49:12.320913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.898 [2024-11-19 09:49:12.320924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:76768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.898 [2024-11-19 09:49:12.320933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.898 [2024-11-19 09:49:12.320944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:76776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.898 [2024-11-19 09:49:12.320953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.898 [2024-11-19 09:49:12.320972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:76784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.898 [2024-11-19 09:49:12.320981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.898 [2024-11-19 09:49:12.320992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:76792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.898 [2024-11-19 09:49:12.321002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.898 [2024-11-19 09:49:12.321013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:76800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.898 [2024-11-19 09:49:12.321022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.898 [2024-11-19 09:49:12.321033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:76296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.898 [2024-11-19 09:49:12.321044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.898 [2024-11-19 09:49:12.321055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:76304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.898 [2024-11-19 09:49:12.321065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.898 [2024-11-19 09:49:12.321076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:76312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.898 [2024-11-19 09:49:12.321086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.898 [2024-11-19 09:49:12.321097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:76320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.898 
[2024-11-19 09:49:12.321106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.898 [2024-11-19 09:49:12.321117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:76328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.898 [2024-11-19 09:49:12.321127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.898 [2024-11-19 09:49:12.321138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:76336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.898 [2024-11-19 09:49:12.321147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.898 [2024-11-19 09:49:12.321158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:76344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.898 [2024-11-19 09:49:12.321167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.898 [2024-11-19 09:49:12.321207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:76352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.898 [2024-11-19 09:49:12.321216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.898 [2024-11-19 09:49:12.321227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:76808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.898 [2024-11-19 09:49:12.321237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.898 [2024-11-19 09:49:12.321248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:76816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.898 [2024-11-19 09:49:12.321274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.898 [2024-11-19 09:49:12.321286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:76824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.898 [2024-11-19 09:49:12.321295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.898 [2024-11-19 09:49:12.321307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:76832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.898 [2024-11-19 09:49:12.321326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.898 [2024-11-19 09:49:12.321337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:76840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.898 [2024-11-19 09:49:12.321347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.898 [2024-11-19 09:49:12.321358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:76848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.898 [2024-11-19 09:49:12.321367] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.898 [2024-11-19 09:49:12.321379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:76856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.898 [2024-11-19 09:49:12.321388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.898 [2024-11-19 09:49:12.321399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:76864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.898 [2024-11-19 09:49:12.321408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.898 [2024-11-19 09:49:12.321420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:76872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.898 [2024-11-19 09:49:12.321430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.898 [2024-11-19 09:49:12.321441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:76880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.898 [2024-11-19 09:49:12.321450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.898 [2024-11-19 09:49:12.321461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:76888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.898 [2024-11-19 09:49:12.321470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.898 [2024-11-19 09:49:12.321481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:76896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.898 [2024-11-19 09:49:12.321491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.898 [2024-11-19 09:49:12.321502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:76904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.898 [2024-11-19 09:49:12.321511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.898 [2024-11-19 09:49:12.321522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:76912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.898 [2024-11-19 09:49:12.321532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.898 [2024-11-19 09:49:12.321543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:76920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.898 [2024-11-19 09:49:12.321553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.898 [2024-11-19 09:49:12.321564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:76928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.898 [2024-11-19 09:49:12.321573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.898 [2024-11-19 09:49:12.321585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:76360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.899 [2024-11-19 09:49:12.321594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.899 [2024-11-19 09:49:12.321605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:76368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.899 [2024-11-19 09:49:12.321629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.899 [2024-11-19 09:49:12.321640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:76376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.899 [2024-11-19 09:49:12.321649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.899 [2024-11-19 09:49:12.321660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:76384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.899 [2024-11-19 09:49:12.321670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.899 [2024-11-19 09:49:12.321681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:76392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.899 [2024-11-19 09:49:12.321690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.899 [2024-11-19 09:49:12.321701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:76400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.899 [2024-11-19 09:49:12.321710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.899 [2024-11-19 09:49:12.321721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:76408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.899 [2024-11-19 09:49:12.321730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.899 [2024-11-19 09:49:12.321741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:76416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.899 [2024-11-19 09:49:12.321750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.899 [2024-11-19 09:49:12.321761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:76936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.899 [2024-11-19 09:49:12.321770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.899 [2024-11-19 09:49:12.321781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:76944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.899 [2024-11-19 09:49:12.321790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:21:24.899 [2024-11-19 09:49:12.321817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:76952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.899 [2024-11-19 09:49:12.321827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.899 [2024-11-19 09:49:12.321838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:76960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.899 [2024-11-19 09:49:12.321847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.899 [2024-11-19 09:49:12.321858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:76968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.899 [2024-11-19 09:49:12.321868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.899 [2024-11-19 09:49:12.321879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:76976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.899 [2024-11-19 09:49:12.321888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.899 [2024-11-19 09:49:12.321899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:76984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.899 [2024-11-19 09:49:12.321908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.899 [2024-11-19 09:49:12.321920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:76992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.899 [2024-11-19 09:49:12.321929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.899 [2024-11-19 09:49:12.321940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:77000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.899 [2024-11-19 09:49:12.321950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.899 [2024-11-19 09:49:12.321961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:77008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.899 [2024-11-19 09:49:12.321971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.899 [2024-11-19 09:49:12.321982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:77016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.899 [2024-11-19 09:49:12.321991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.899 [2024-11-19 09:49:12.322002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:77024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.899 [2024-11-19 09:49:12.322019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.899 [2024-11-19 
09:49:12.322031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:77032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.899 [2024-11-19 09:49:12.322040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.899 [2024-11-19 09:49:12.322051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.899 [2024-11-19 09:49:12.322061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.899 [2024-11-19 09:49:12.322072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:77048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.899 [2024-11-19 09:49:12.322081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.899 [2024-11-19 09:49:12.322093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:77056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.899 [2024-11-19 09:49:12.322102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.899 [2024-11-19 09:49:12.322114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:77064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.899 [2024-11-19 09:49:12.322124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.899 [2024-11-19 09:49:12.322136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:77072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.899 [2024-11-19 09:49:12.322146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.899 [2024-11-19 09:49:12.322157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:77080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.899 [2024-11-19 09:49:12.322167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.899 [2024-11-19 09:49:12.322178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:77088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.899 [2024-11-19 09:49:12.322187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.899 [2024-11-19 09:49:12.322198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:76424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.899 [2024-11-19 09:49:12.322207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.899 [2024-11-19 09:49:12.322219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:76432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.899 [2024-11-19 09:49:12.322228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.899 [2024-11-19 09:49:12.322248] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:76440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.899 [2024-11-19 09:49:12.322259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.899 [2024-11-19 09:49:12.322271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:76448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.899 [2024-11-19 09:49:12.322280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.899 [2024-11-19 09:49:12.322291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:76456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.900 [2024-11-19 09:49:12.322301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.900 [2024-11-19 09:49:12.322312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:76464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.900 [2024-11-19 09:49:12.322321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.900 [2024-11-19 09:49:12.322332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:76472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.900 [2024-11-19 09:49:12.322342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.900 [2024-11-19 09:49:12.322353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:76480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.900 [2024-11-19 09:49:12.322368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.900 [2024-11-19 09:49:12.322379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:76488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.900 [2024-11-19 09:49:12.322389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.900 [2024-11-19 09:49:12.322400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:76496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.900 [2024-11-19 09:49:12.322415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.900 [2024-11-19 09:49:12.322426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:76504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.900 [2024-11-19 09:49:12.322435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.900 [2024-11-19 09:49:12.322446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:76512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.900 [2024-11-19 09:49:12.322457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.900 [2024-11-19 09:49:12.322468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:58 nsid:1 lba:76520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.900 [2024-11-19 09:49:12.322477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.900 [2024-11-19 09:49:12.322489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:76528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.900 [2024-11-19 09:49:12.322498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.900 [2024-11-19 09:49:12.322509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:76536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.900 [2024-11-19 09:49:12.322518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.900 [2024-11-19 09:49:12.322529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:76544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.900 [2024-11-19 09:49:12.322539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.900 [2024-11-19 09:49:12.322550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:77096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.900 [2024-11-19 09:49:12.322559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.900 [2024-11-19 09:49:12.322570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:77104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.900 [2024-11-19 09:49:12.322580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.900 [2024-11-19 09:49:12.322591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:77112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.900 [2024-11-19 09:49:12.322600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.900 [2024-11-19 09:49:12.322611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:77120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.900 [2024-11-19 09:49:12.322621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.900 [2024-11-19 09:49:12.322632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:77128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.900 [2024-11-19 09:49:12.322642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.900 [2024-11-19 09:49:12.322653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:77136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.900 [2024-11-19 09:49:12.322662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.900 [2024-11-19 09:49:12.322673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:77144 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.900 [2024-11-19 09:49:12.322682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.900 [2024-11-19 09:49:12.322694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:77152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.900 [2024-11-19 09:49:12.322707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.900 [2024-11-19 09:49:12.322718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:77160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.900 [2024-11-19 09:49:12.322728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.900 [2024-11-19 09:49:12.322739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:77168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.900 [2024-11-19 09:49:12.322748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.900 [2024-11-19 09:49:12.322759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:77176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.900 [2024-11-19 09:49:12.322769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.900 [2024-11-19 09:49:12.322780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:77184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.900 [2024-11-19 09:49:12.322790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.900 [2024-11-19 09:49:12.322801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:77192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.900 [2024-11-19 09:49:12.322810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.900 [2024-11-19 09:49:12.322821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:77200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.900 [2024-11-19 09:49:12.322831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.900 [2024-11-19 09:49:12.322842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:76552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.900 [2024-11-19 09:49:12.322851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.900 [2024-11-19 09:49:12.322862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:76560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.900 [2024-11-19 09:49:12.322872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.900 [2024-11-19 09:49:12.322883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:76568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.900 
[2024-11-19 09:49:12.322893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.900 [2024-11-19 09:49:12.322904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:76576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.900 [2024-11-19 09:49:12.322914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.900 [2024-11-19 09:49:12.322925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:76584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.900 [2024-11-19 09:49:12.322934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.900 [2024-11-19 09:49:12.322945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:76592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.900 [2024-11-19 09:49:12.322954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.900 [2024-11-19 09:49:12.322966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:76600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.900 [2024-11-19 09:49:12.322975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.901 [2024-11-19 09:49:12.322986] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa331d0 is same with the state(6) to be set 00:21:24.901 [2024-11-19 09:49:12.322998] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:24.901 [2024-11-19 09:49:12.323006] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:24.901 [2024-11-19 09:49:12.323014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:76608 len:8 PRP1 0x0 PRP2 0x0 00:21:24.901 [2024-11-19 09:49:12.323024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.901 [2024-11-19 09:49:12.323038] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:24.901 [2024-11-19 09:49:12.323046] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:24.901 [2024-11-19 09:49:12.323054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77208 len:8 PRP1 0x0 PRP2 0x0 00:21:24.901 [2024-11-19 09:49:12.323063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.901 [2024-11-19 09:49:12.323073] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:24.901 [2024-11-19 09:49:12.323080] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:24.901 [2024-11-19 09:49:12.323088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77216 len:8 PRP1 0x0 PRP2 0x0 00:21:24.901 [2024-11-19 09:49:12.323097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.901 [2024-11-19 09:49:12.323107] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:24.901 [2024-11-19 09:49:12.323114] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:24.901 [2024-11-19 09:49:12.323122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77224 len:8 PRP1 0x0 PRP2 0x0 00:21:24.901 [2024-11-19 09:49:12.323131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.901 [2024-11-19 09:49:12.323143] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:24.901 [2024-11-19 09:49:12.323151] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:24.901 [2024-11-19 09:49:12.323171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77232 len:8 PRP1 0x0 PRP2 0x0 00:21:24.901 [2024-11-19 09:49:12.323180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.901 [2024-11-19 09:49:12.323190] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:24.901 [2024-11-19 09:49:12.323197] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:24.901 [2024-11-19 09:49:12.323205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77240 len:8 PRP1 0x0 PRP2 0x0 00:21:24.901 [2024-11-19 09:49:12.323214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.901 [2024-11-19 09:49:12.323253] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:24.901 [2024-11-19 09:49:12.323262] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:24.901 [2024-11-19 09:49:12.323270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77248 len:8 PRP1 0x0 PRP2 0x0 00:21:24.901 [2024-11-19 09:49:12.323279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.901 [2024-11-19 09:49:12.323288] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:24.901 [2024-11-19 09:49:12.323296] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:24.901 [2024-11-19 09:49:12.323303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77256 len:8 PRP1 0x0 PRP2 0x0 00:21:24.901 [2024-11-19 09:49:12.323313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.901 [2024-11-19 09:49:12.323322] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:24.901 [2024-11-19 09:49:12.323329] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:24.901 [2024-11-19 09:49:12.323337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77264 len:8 PRP1 0x0 PRP2 0x0 00:21:24.901 [2024-11-19 09:49:12.323346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.901 [2024-11-19 09:49:12.323360] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting 
queued i/o 00:21:24.901 [2024-11-19 09:49:12.323368] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:24.901 [2024-11-19 09:49:12.323375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77272 len:8 PRP1 0x0 PRP2 0x0 00:21:24.901 [2024-11-19 09:49:12.323384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.901 [2024-11-19 09:49:12.323394] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:24.901 [2024-11-19 09:49:12.323401] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:24.901 [2024-11-19 09:49:12.323409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77280 len:8 PRP1 0x0 PRP2 0x0 00:21:24.901 [2024-11-19 09:49:12.323418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.901 [2024-11-19 09:49:12.323427] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:24.901 [2024-11-19 09:49:12.323435] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:24.901 [2024-11-19 09:49:12.323442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77288 len:8 PRP1 0x0 PRP2 0x0 00:21:24.901 [2024-11-19 09:49:12.323452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.901 [2024-11-19 09:49:12.323462] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:24.901 [2024-11-19 09:49:12.323470] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:24.901 [2024-11-19 09:49:12.323477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77296 len:8 PRP1 0x0 PRP2 0x0 00:21:24.901 [2024-11-19 09:49:12.323486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.901 [2024-11-19 09:49:12.323496] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:24.901 [2024-11-19 09:49:12.323503] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:24.901 [2024-11-19 09:49:12.323511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77304 len:8 PRP1 0x0 PRP2 0x0 00:21:24.901 [2024-11-19 09:49:12.323520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.901 [2024-11-19 09:49:12.323529] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:24.901 [2024-11-19 09:49:12.323537] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:24.901 [2024-11-19 09:49:12.323544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77312 len:8 PRP1 0x0 PRP2 0x0 00:21:24.901 [2024-11-19 09:49:12.323553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.901 [2024-11-19 09:49:12.323563] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:24.901 [2024-11-19 09:49:12.323570] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:24.901 [2024-11-19 09:49:12.323586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:76616 len:8 PRP1 0x0 PRP2 0x0 00:21:24.901 [2024-11-19 09:49:12.323595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.901 [2024-11-19 09:49:12.323605] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:24.901 [2024-11-19 09:49:12.323612] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:24.901 [2024-11-19 09:49:12.323620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:76624 len:8 PRP1 0x0 PRP2 0x0 00:21:24.902 [2024-11-19 09:49:12.323629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.902 [2024-11-19 09:49:12.323643] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:24.902 [2024-11-19 09:49:12.323651] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:24.902 [2024-11-19 09:49:12.323659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:76632 len:8 PRP1 0x0 PRP2 0x0 00:21:24.902 [2024-11-19 09:49:12.323668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.902 [2024-11-19 09:49:12.323677] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:24.902 [2024-11-19 09:49:12.323684] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:24.902 [2024-11-19 09:49:12.323692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:76640 len:8 PRP1 0x0 PRP2 0x0 00:21:24.902 [2024-11-19 09:49:12.323701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.902 [2024-11-19 09:49:12.323710] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:24.902 [2024-11-19 09:49:12.323718] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:24.902 [2024-11-19 09:49:12.323726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:76648 len:8 PRP1 0x0 PRP2 0x0 00:21:24.902 [2024-11-19 09:49:12.323735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.902 [2024-11-19 09:49:12.323745] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:24.902 [2024-11-19 09:49:12.323752] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:24.902 [2024-11-19 09:49:12.323775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:76656 len:8 PRP1 0x0 PRP2 0x0 00:21:24.902 [2024-11-19 09:49:12.323784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.902 [2024-11-19 09:49:12.337270] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:24.902 [2024-11-19 09:49:12.337307] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: 
*NOTICE*: Command completed manually: 00:21:24.902 [2024-11-19 09:49:12.337324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:76664 len:8 PRP1 0x0 PRP2 0x0 00:21:24.902 [2024-11-19 09:49:12.337338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.902 [2024-11-19 09:49:12.337352] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:24.902 [2024-11-19 09:49:12.337363] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:24.902 [2024-11-19 09:49:12.337374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:76672 len:8 PRP1 0x0 PRP2 0x0 00:21:24.902 [2024-11-19 09:49:12.337388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.902 [2024-11-19 09:49:12.337598] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:24.902 [2024-11-19 09:49:12.337631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.902 [2024-11-19 09:49:12.337650] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:24.902 [2024-11-19 09:49:12.337663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.902 [2024-11-19 09:49:12.337677] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:24.902 [2024-11-19 09:49:12.337689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.902 [2024-11-19 09:49:12.337703] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:24.902 [2024-11-19 09:49:12.337728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.902 [2024-11-19 09:49:12.337738] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5e50 is same with the state(6) to be set 00:21:24.902 [2024-11-19 09:49:12.337961] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:21:24.902 [2024-11-19 09:49:12.337989] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5e50 (9): Bad file descriptor 00:21:24.902 [2024-11-19 09:49:12.338089] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:21:24.902 [2024-11-19 09:49:12.338111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5e50 with addr=10.0.0.3, port=4420 00:21:24.902 [2024-11-19 09:49:12.338122] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5e50 is same with the state(6) to be set 00:21:24.902 [2024-11-19 09:49:12.338140] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5e50 (9): Bad file descriptor 00:21:24.902 [2024-11-19 09:49:12.338155] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is 
in error state 00:21:24.902 [2024-11-19 09:49:12.338164] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:21:24.902 [2024-11-19 09:49:12.338175] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:21:24.902 [2024-11-19 09:49:12.338184] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:21:24.902 [2024-11-19 09:49:12.338194] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:21:24.902 09:49:12 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@90 -- # sleep 1 00:21:25.837 4768.50 IOPS, 18.63 MiB/s [2024-11-19T09:49:13.460Z] [2024-11-19 09:49:13.338344] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:21:25.837 [2024-11-19 09:49:13.338403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5e50 with addr=10.0.0.3, port=4420 00:21:25.837 [2024-11-19 09:49:13.338424] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5e50 is same with the state(6) to be set 00:21:25.837 [2024-11-19 09:49:13.338454] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5e50 (9): Bad file descriptor 00:21:25.837 [2024-11-19 09:49:13.338474] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:21:25.837 [2024-11-19 09:49:13.338485] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:21:25.837 [2024-11-19 09:49:13.338496] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:21:25.837 [2024-11-19 09:49:13.338507] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:21:25.837 [2024-11-19 09:49:13.338519] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:21:25.837 09:49:13 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:21:26.095 [2024-11-19 09:49:13.667646] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:26.095 09:49:13 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@92 -- # wait 82341 00:21:26.948 3179.00 IOPS, 12.42 MiB/s [2024-11-19T09:49:14.571Z] [2024-11-19 09:49:14.350377] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 
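The long run of "ABORTED - SQ DELETION (00/08)" completions above, together with the repeated "connect() failed, errno = 111" and "Resetting controller failed" messages, is the expected shape of this timeout test: the target's TCP listener has been withdrawn, so queued I/O is aborted when the submission queue is torn down and every reconnect attempt is refused, until host/timeout.sh@91 re-adds the listener with rpc.py and the next reset completes ("Resetting controller successful"). A minimal sketch, not part of the SPDK tree, of driving the same listener remove/re-add cycle from Python by shelling out to scripts/rpc.py; paths and addresses are copied from the trace above, and the ordering shown is illustrative only:

    import subprocess
    import time

    RPC_PY = "/home/vagrant/spdk_repo/spdk/scripts/rpc.py"   # path as it appears in the trace
    NQN = "nqn.2016-06.io.spdk:cnode1"
    LISTENER = ["-t", "tcp", "-a", "10.0.0.3", "-s", "4420"]

    def rpc(*args):
        # Mirrors one "-- # .../scripts/rpc.py ..." line from the xtrace output above.
        subprocess.run([RPC_PY, *args], check=True)

    # Withdraw the listener so in-flight I/O hits the host-side timeout path ...
    rpc("nvmf_subsystem_remove_listener", NQN, *LISTENER)
    time.sleep(1)   # the script's own pacing ("sleep 1" at host/timeout.sh@90/@98)
    # ... then restore it so the host's controller reset can finally succeed.
    rpc("nvmf_subsystem_add_listener", NQN, *LISTENER)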
00:21:28.815 2384.25 IOPS, 9.31 MiB/s [2024-11-19T09:49:17.372Z] 3221.00 IOPS, 12.58 MiB/s [2024-11-19T09:49:18.306Z] 4217.50 IOPS, 16.47 MiB/s [2024-11-19T09:49:19.242Z] 4920.14 IOPS, 19.22 MiB/s [2024-11-19T09:49:20.175Z] 5453.12 IOPS, 21.30 MiB/s [2024-11-19T09:49:21.651Z] 5878.33 IOPS, 22.96 MiB/s [2024-11-19T09:49:21.652Z] 6215.30 IOPS, 24.28 MiB/s
00:21:34.029 Latency(us)
00:21:34.029 [2024-11-19T09:49:21.652Z] Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:21:34.029 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:21:34.029 Verification LBA range: start 0x0 length 0x4000
00:21:34.029 NVMe0n1                     :      10.01    6219.14      24.29       0.00     0.00   20541.06    2353.34 3035150.89
00:21:34.029 [2024-11-19T09:49:21.652Z] ===================================================================================================================
00:21:34.029 [2024-11-19T09:49:21.652Z] Total                       :               6219.14      24.29       0.00     0.00   20541.06    2353.34 3035150.89
00:21:34.029 {
00:21:34.029   "results": [
00:21:34.029     {
00:21:34.029       "job": "NVMe0n1",
00:21:34.029       "core_mask": "0x4",
00:21:34.029       "workload": "verify",
00:21:34.029       "status": "finished",
00:21:34.029       "verify_range": {
00:21:34.029         "start": 0,
00:21:34.029         "length": 16384
00:21:34.029       },
00:21:34.029       "queue_depth": 128,
00:21:34.029       "io_size": 4096,
00:21:34.029       "runtime": 10.010554,
00:21:34.029       "iops": 6219.136323524152,
00:21:34.029       "mibps": 24.29350126376622,
00:21:34.029       "io_failed": 0,
00:21:34.029       "io_timeout": 0,
00:21:34.029       "avg_latency_us": 20541.057061827294,
00:21:34.029       "min_latency_us": 2353.338181818182,
00:21:34.029       "max_latency_us": 3035150.8945454545
00:21:34.029     }
00:21:34.029   ],
00:21:34.029   "core_count": 1
00:21:34.029 }
00:21:34.029 09:49:21 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@97 -- # rpc_pid=82446
00:21:34.029 09:49:21 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:21:34.029 09:49:21 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@98 -- # sleep 1
00:21:34.029 Running I/O for 10 seconds...
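The JSON document printed above by bdevperf's perform_tests carries the same numbers as the plain-text latency table (6219.14 IOPS, 24.29 MiB/s, 20541.06 us average latency, no failed or timed-out I/O). A minimal sketch of summarizing such a results document in Python, assuming the elapsed-time prefixes ("00:21:34.029") have already been stripped from each captured line; the helper name is hypothetical and not part of bdevperf:

    import json

    def summarize(results_json: str) -> str:
        doc = json.loads(results_json)
        lines = []
        for job in doc["results"]:   # one entry per bdevperf job, e.g. "NVMe0n1"
            lines.append(
                '{}: {:.2f} IOPS, {:.2f} MiB/s, avg latency {:.2f} us, '
                '{} failed, {} timed out'.format(
                    job["job"], job["iops"], job["mibps"],
                    job["avg_latency_us"], job["io_failed"], job["io_timeout"]))
        return "\n".join(lines)

    # For the results block above this yields:
    #   NVMe0n1: 6219.14 IOPS, 24.29 MiB/s, avg latency 20541.06 us, 0 failed, 0 timed out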
00:21:34.596 09:49:22 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:21:34.857 6806.00 IOPS, 26.59 MiB/s [2024-11-19T09:49:22.480Z] [2024-11-19 09:49:22.465706] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:34.857 [2024-11-19 09:49:22.465761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.857 [2024-11-19 09:49:22.465776] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:34.857 [2024-11-19 09:49:22.465786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.857 [2024-11-19 09:49:22.465796] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:34.857 [2024-11-19 09:49:22.465805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.857 [2024-11-19 09:49:22.465815] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:34.857 [2024-11-19 09:49:22.465824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.857 [2024-11-19 09:49:22.465833] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5e50 is same with the state(6) to be set 00:21:34.857 [2024-11-19 09:49:22.466092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:61608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.857 [2024-11-19 09:49:22.466111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.857 [2024-11-19 09:49:22.466131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:61736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.857 [2024-11-19 09:49:22.466142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.857 [2024-11-19 09:49:22.466157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:61744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.857 [2024-11-19 09:49:22.466167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.857 [2024-11-19 09:49:22.466178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:61752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.857 [2024-11-19 09:49:22.466188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.857 [2024-11-19 09:49:22.466199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:61760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.857 [2024-11-19 09:49:22.466223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:21:34.857 [2024-11-19 09:49:22.466237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:61768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.857 [2024-11-19 09:49:22.466246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.857 [2024-11-19 09:49:22.466258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:61776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.857 [2024-11-19 09:49:22.466267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.857 [2024-11-19 09:49:22.466278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:61784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.857 [2024-11-19 09:49:22.466288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.857 [2024-11-19 09:49:22.466299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:61792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.857 [2024-11-19 09:49:22.466308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.857 [2024-11-19 09:49:22.466319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:61800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.857 [2024-11-19 09:49:22.466328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.857 [2024-11-19 09:49:22.466339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:61808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.858 [2024-11-19 09:49:22.466348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.858 [2024-11-19 09:49:22.466359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:61816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.858 [2024-11-19 09:49:22.466368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.858 [2024-11-19 09:49:22.466379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:61824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.858 [2024-11-19 09:49:22.466390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.858 [2024-11-19 09:49:22.466401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:61832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.858 [2024-11-19 09:49:22.466410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.858 [2024-11-19 09:49:22.466421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:61840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.858 [2024-11-19 09:49:22.466431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.858 [2024-11-19 
09:49:22.466442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:61848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.858 [2024-11-19 09:49:22.466451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.858 [2024-11-19 09:49:22.466462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:61856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.858 [2024-11-19 09:49:22.466471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.858 [2024-11-19 09:49:22.466482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.858 [2024-11-19 09:49:22.466491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.858 [2024-11-19 09:49:22.466502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:61872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.858 [2024-11-19 09:49:22.466511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.858 [2024-11-19 09:49:22.466522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:61880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.858 [2024-11-19 09:49:22.466531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.858 [2024-11-19 09:49:22.466541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:61888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.858 [2024-11-19 09:49:22.466550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.858 [2024-11-19 09:49:22.466561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:61896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.858 [2024-11-19 09:49:22.466570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.858 [2024-11-19 09:49:22.466581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:61904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.858 [2024-11-19 09:49:22.466590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.858 [2024-11-19 09:49:22.466601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:61912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.858 [2024-11-19 09:49:22.466610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.858 [2024-11-19 09:49:22.466621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:61920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.858 [2024-11-19 09:49:22.466630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.858 [2024-11-19 09:49:22.466649] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:61928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.858 [2024-11-19 09:49:22.466658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.858 [2024-11-19 09:49:22.466669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:61936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.858 [2024-11-19 09:49:22.466678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.858 [2024-11-19 09:49:22.466689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:61944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.858 [2024-11-19 09:49:22.466700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.858 [2024-11-19 09:49:22.466710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:61952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.858 [2024-11-19 09:49:22.466720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.858 [2024-11-19 09:49:22.466731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:61960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.858 [2024-11-19 09:49:22.466741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.858 [2024-11-19 09:49:22.466752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:61968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.858 [2024-11-19 09:49:22.466761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.858 [2024-11-19 09:49:22.466772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:61976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.858 [2024-11-19 09:49:22.466781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.858 [2024-11-19 09:49:22.466793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:61984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.858 [2024-11-19 09:49:22.466802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.858 [2024-11-19 09:49:22.466813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:61992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.858 [2024-11-19 09:49:22.466822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.858 [2024-11-19 09:49:22.466833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:62000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.858 [2024-11-19 09:49:22.466842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.858 [2024-11-19 09:49:22.466853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:48 nsid:1 lba:62008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.858 [2024-11-19 09:49:22.466862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.858 [2024-11-19 09:49:22.466873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:62016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.858 [2024-11-19 09:49:22.466882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.858 [2024-11-19 09:49:22.466893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:62024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.858 [2024-11-19 09:49:22.466902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.858 [2024-11-19 09:49:22.466913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:62032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.858 [2024-11-19 09:49:22.466922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.858 [2024-11-19 09:49:22.466933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:62040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.858 [2024-11-19 09:49:22.466942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.858 [2024-11-19 09:49:22.466953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:62048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.858 [2024-11-19 09:49:22.466963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.858 [2024-11-19 09:49:22.466974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:62056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.858 [2024-11-19 09:49:22.466982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.858 [2024-11-19 09:49:22.466993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:62064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.858 [2024-11-19 09:49:22.467002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.858 [2024-11-19 09:49:22.467013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:62072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.858 [2024-11-19 09:49:22.467022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.858 [2024-11-19 09:49:22.467033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:62080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.858 [2024-11-19 09:49:22.467044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.858 [2024-11-19 09:49:22.467055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:62088 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:21:34.858 [2024-11-19 09:49:22.467064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.858 [2024-11-19 09:49:22.467075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:62096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.858 [2024-11-19 09:49:22.467084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.858 [2024-11-19 09:49:22.467095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:62104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.858 [2024-11-19 09:49:22.467104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.858 [2024-11-19 09:49:22.467116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:62112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.858 [2024-11-19 09:49:22.467125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.858 [2024-11-19 09:49:22.467136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:62120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.858 [2024-11-19 09:49:22.467145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.859 [2024-11-19 09:49:22.467156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:62128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.859 [2024-11-19 09:49:22.467165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.859 [2024-11-19 09:49:22.467176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:62136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.859 [2024-11-19 09:49:22.467186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.859 [2024-11-19 09:49:22.467196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:62144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.859 [2024-11-19 09:49:22.467215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.859 [2024-11-19 09:49:22.467228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:62152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.859 [2024-11-19 09:49:22.467238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.859 [2024-11-19 09:49:22.467260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:62160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.859 [2024-11-19 09:49:22.467270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.859 [2024-11-19 09:49:22.467281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:62168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.859 [2024-11-19 
09:49:22.467290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.859 [2024-11-19 09:49:22.467301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:62176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.859 [2024-11-19 09:49:22.467310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.859 [2024-11-19 09:49:22.467321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:62184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.859 [2024-11-19 09:49:22.467331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.859 [2024-11-19 09:49:22.467341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:62192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.859 [2024-11-19 09:49:22.467351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.859 [2024-11-19 09:49:22.467362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:62200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.859 [2024-11-19 09:49:22.467371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.859 [2024-11-19 09:49:22.467382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:62208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.859 [2024-11-19 09:49:22.467393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.859 [2024-11-19 09:49:22.467405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:62216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.859 [2024-11-19 09:49:22.467414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.859 [2024-11-19 09:49:22.467425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:62224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.859 [2024-11-19 09:49:22.467434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.859 [2024-11-19 09:49:22.467445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:62232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.859 [2024-11-19 09:49:22.467454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.859 [2024-11-19 09:49:22.467465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:62240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.859 [2024-11-19 09:49:22.467474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.859 [2024-11-19 09:49:22.467495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:62248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.859 [2024-11-19 09:49:22.467504] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.859 [2024-11-19 09:49:22.467515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:62256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.859 [2024-11-19 09:49:22.467524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.859 [2024-11-19 09:49:22.467535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:62264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.859 [2024-11-19 09:49:22.467544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.859 [2024-11-19 09:49:22.467555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:62272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.859 [2024-11-19 09:49:22.467565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.859 [2024-11-19 09:49:22.467575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:62280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.859 [2024-11-19 09:49:22.467585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.859 [2024-11-19 09:49:22.467596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:62288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.859 [2024-11-19 09:49:22.467605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.859 [2024-11-19 09:49:22.467616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:62296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.859 [2024-11-19 09:49:22.467625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.859 [2024-11-19 09:49:22.467636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:62304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.859 [2024-11-19 09:49:22.467645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.859 [2024-11-19 09:49:22.467657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:62312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.859 [2024-11-19 09:49:22.467666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.859 [2024-11-19 09:49:22.467678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:62320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.859 [2024-11-19 09:49:22.467687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.859 [2024-11-19 09:49:22.467698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:62328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.859 [2024-11-19 09:49:22.467707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.859 [2024-11-19 09:49:22.467718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:62336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.859 [2024-11-19 09:49:22.467728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.859 [2024-11-19 09:49:22.467739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:62344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.859 [2024-11-19 09:49:22.467748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.859 [2024-11-19 09:49:22.467759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:62352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.859 [2024-11-19 09:49:22.467768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.859 [2024-11-19 09:49:22.467779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:62360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.859 [2024-11-19 09:49:22.467788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.859 [2024-11-19 09:49:22.467799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:62368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.859 [2024-11-19 09:49:22.467808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.859 [2024-11-19 09:49:22.467819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:62376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.859 [2024-11-19 09:49:22.467828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.859 [2024-11-19 09:49:22.467839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:62384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.859 [2024-11-19 09:49:22.467849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.859 [2024-11-19 09:49:22.467859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:62392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.859 [2024-11-19 09:49:22.467868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.859 [2024-11-19 09:49:22.467879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:62400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.859 [2024-11-19 09:49:22.467888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.859 [2024-11-19 09:49:22.467900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:62408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.859 [2024-11-19 09:49:22.467909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:21:34.859 [2024-11-19 09:49:22.467920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:62416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.859 [2024-11-19 09:49:22.467929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.859 [2024-11-19 09:49:22.467940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:62424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.859 [2024-11-19 09:49:22.467949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.859 [2024-11-19 09:49:22.467971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:62432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.859 [2024-11-19 09:49:22.467981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.859 [2024-11-19 09:49:22.467991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:62440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.859 [2024-11-19 09:49:22.468001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.860 [2024-11-19 09:49:22.468012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:62448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.860 [2024-11-19 09:49:22.468021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.860 [2024-11-19 09:49:22.468031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:62456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.860 [2024-11-19 09:49:22.468041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.860 [2024-11-19 09:49:22.468052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:62464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.860 [2024-11-19 09:49:22.468062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.860 [2024-11-19 09:49:22.468073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:62472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.860 [2024-11-19 09:49:22.468082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.860 [2024-11-19 09:49:22.468093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:62480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.860 [2024-11-19 09:49:22.468102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.860 [2024-11-19 09:49:22.468114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:62488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.860 [2024-11-19 09:49:22.468123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.860 [2024-11-19 09:49:22.468134] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:62496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.860 [2024-11-19 09:49:22.468144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.860 [2024-11-19 09:49:22.468154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:62504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.860 [2024-11-19 09:49:22.468164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.860 [2024-11-19 09:49:22.468175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:62512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.860 [2024-11-19 09:49:22.468185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.860 [2024-11-19 09:49:22.468195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:62520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.860 [2024-11-19 09:49:22.468204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.860 [2024-11-19 09:49:22.468225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:62528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.860 [2024-11-19 09:49:22.468235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.860 [2024-11-19 09:49:22.468246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:62536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.860 [2024-11-19 09:49:22.468254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.860 [2024-11-19 09:49:22.468265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:62544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.860 [2024-11-19 09:49:22.468274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.860 [2024-11-19 09:49:22.468285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:62552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.860 [2024-11-19 09:49:22.468295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.860 [2024-11-19 09:49:22.468311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:62560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.860 [2024-11-19 09:49:22.468320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.860 [2024-11-19 09:49:22.468331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:62568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.860 [2024-11-19 09:49:22.468341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.860 [2024-11-19 09:49:22.468352] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:62576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.860 [2024-11-19 09:49:22.468361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.860 [2024-11-19 09:49:22.468372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:62584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.860 [2024-11-19 09:49:22.468382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.860 [2024-11-19 09:49:22.468393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:62592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.860 [2024-11-19 09:49:22.468402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.860 [2024-11-19 09:49:22.468413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:62600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.860 [2024-11-19 09:49:22.468422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.860 [2024-11-19 09:49:22.468433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:62608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.860 [2024-11-19 09:49:22.468442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.860 [2024-11-19 09:49:22.468453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:61616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.860 [2024-11-19 09:49:22.468462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.860 [2024-11-19 09:49:22.468473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:61624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.860 [2024-11-19 09:49:22.468482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.860 [2024-11-19 09:49:22.468499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:61632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.860 [2024-11-19 09:49:22.468508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.860 [2024-11-19 09:49:22.468519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:61640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.860 [2024-11-19 09:49:22.468528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.860 [2024-11-19 09:49:22.468539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:61648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.860 [2024-11-19 09:49:22.468548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.860 [2024-11-19 09:49:22.468559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 
lba:61656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.860 [2024-11-19 09:49:22.468568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.860 [2024-11-19 09:49:22.468579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:61664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.860 [2024-11-19 09:49:22.468588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.860 [2024-11-19 09:49:22.468598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:61672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.860 [2024-11-19 09:49:22.468607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.860 [2024-11-19 09:49:22.468618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:61680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.860 [2024-11-19 09:49:22.468633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.860 [2024-11-19 09:49:22.468649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:61688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.860 [2024-11-19 09:49:22.468658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.860 [2024-11-19 09:49:22.468669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:61696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.860 [2024-11-19 09:49:22.468678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.860 [2024-11-19 09:49:22.468689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:61704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.860 [2024-11-19 09:49:22.468698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.860 [2024-11-19 09:49:22.468709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:61712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.860 [2024-11-19 09:49:22.468718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.860 [2024-11-19 09:49:22.468729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:61720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.860 [2024-11-19 09:49:22.468738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.860 [2024-11-19 09:49:22.468749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:61728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.860 [2024-11-19 09:49:22.468758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.860 [2024-11-19 09:49:22.468769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:62616 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:21:34.860 [2024-11-19 09:49:22.468778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.860 [2024-11-19 09:49:22.468789] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa34290 is same with the state(6) to be set 00:21:34.860 [2024-11-19 09:49:22.468800] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:34.860 [2024-11-19 09:49:22.468807] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:34.860 [2024-11-19 09:49:22.468815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62624 len:8 PRP1 0x0 PRP2 0x0 00:21:34.860 [2024-11-19 09:49:22.468825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.119 [2024-11-19 09:49:22.469091] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:21:35.119 [2024-11-19 09:49:22.469120] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5e50 (9): Bad file descriptor 00:21:35.119 [2024-11-19 09:49:22.469231] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:21:35.119 [2024-11-19 09:49:22.469254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5e50 with addr=10.0.0.3, port=4420 00:21:35.119 [2024-11-19 09:49:22.469265] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5e50 is same with the state(6) to be set 00:21:35.119 [2024-11-19 09:49:22.469283] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5e50 (9): Bad file descriptor 00:21:35.119 [2024-11-19 09:49:22.469299] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:21:35.119 [2024-11-19 09:49:22.469308] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:21:35.119 [2024-11-19 09:49:22.469319] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:21:35.119 [2024-11-19 09:49:22.469329] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 
00:21:35.119 [2024-11-19 09:49:22.469339] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:21:35.119 09:49:22 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@101 -- # sleep 3 00:21:36.056 3850.50 IOPS, 15.04 MiB/s [2024-11-19T09:49:23.679Z] [2024-11-19 09:49:23.469482] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:21:36.056 [2024-11-19 09:49:23.469571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5e50 with addr=10.0.0.3, port=4420 00:21:36.056 [2024-11-19 09:49:23.469602] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5e50 is same with the state(6) to be set 00:21:36.056 [2024-11-19 09:49:23.469628] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5e50 (9): Bad file descriptor 00:21:36.056 [2024-11-19 09:49:23.469647] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:21:36.056 [2024-11-19 09:49:23.469657] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:21:36.056 [2024-11-19 09:49:23.469668] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:21:36.056 [2024-11-19 09:49:23.469679] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 00:21:36.056 [2024-11-19 09:49:23.469690] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:21:36.990 2567.00 IOPS, 10.03 MiB/s [2024-11-19T09:49:24.613Z] [2024-11-19 09:49:24.469828] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:21:36.990 [2024-11-19 09:49:24.469890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5e50 with addr=10.0.0.3, port=4420 00:21:36.990 [2024-11-19 09:49:24.469907] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5e50 is same with the state(6) to be set 00:21:36.990 [2024-11-19 09:49:24.469932] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5e50 (9): Bad file descriptor 00:21:36.990 [2024-11-19 09:49:24.469952] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:21:36.990 [2024-11-19 09:49:24.469962] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:21:36.990 [2024-11-19 09:49:24.469972] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:21:36.990 [2024-11-19 09:49:24.469984] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 
00:21:36.990 [2024-11-19 09:49:24.469995] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:21:37.927 1925.25 IOPS, 7.52 MiB/s [2024-11-19T09:49:25.550Z] [2024-11-19 09:49:25.474019] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:21:37.927 [2024-11-19 09:49:25.474103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5e50 with addr=10.0.0.3, port=4420 00:21:37.927 [2024-11-19 09:49:25.474120] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5e50 is same with the state(6) to be set 00:21:37.927 [2024-11-19 09:49:25.474393] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5e50 (9): Bad file descriptor 00:21:37.927 [2024-11-19 09:49:25.474643] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:21:37.927 [2024-11-19 09:49:25.474662] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:21:37.927 [2024-11-19 09:49:25.474674] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:21:37.927 [2024-11-19 09:49:25.474685] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 00:21:37.927 [2024-11-19 09:49:25.474697] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:21:37.927 09:49:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:21:38.185 [2024-11-19 09:49:25.732231] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:38.185 09:49:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@103 -- # wait 82446 00:21:39.012 1540.20 IOPS, 6.02 MiB/s [2024-11-19T09:49:26.635Z] [2024-11-19 09:49:26.503920] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 4] Resetting controller successful. 
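Illustrative aside, not part of the captured output: the pattern visible above — connect() to 10.0.0.3:4420 failing with errno 111 (ECONNREFUSED) roughly once per second while the listener is removed, then the controller reset completing once nvmf_subsystem_add_listener brings the port back — can be sketched as a standalone retry loop in Python. The helper name retry_until and the delay/timeout values are hypothetical examples; this is not SPDK code.

import errno
import socket
import time


def retry_until(host, port, reconnect_delay_sec, ctrlr_loss_timeout_sec):
    """Retry a TCP connect every reconnect_delay_sec seconds and give up after
    ctrlr_loss_timeout_sec seconds; True means the listener came back in time."""
    deadline = time.monotonic() + ctrlr_loss_timeout_sec
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=1):
                return True  # listener reachable again; a reset could now succeed
        except OSError as exc:
            if exc.errno != errno.ECONNREFUSED:
                raise  # anything other than connection-refused is unexpected here
        time.sleep(reconnect_delay_sec)
    return False  # window exhausted; the controller would be declared lost


if __name__ == "__main__":
    # Address and port are taken from the log; delay/timeout values are examples only.
    print(retry_until("10.0.0.3", 4420, reconnect_delay_sec=1, ctrlr_loss_timeout_sec=5))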
00:21:40.907 2579.33 IOPS, 10.08 MiB/s [2024-11-19T09:49:29.467Z] 3536.57 IOPS, 13.81 MiB/s [2024-11-19T09:49:30.403Z] 4271.50 IOPS, 16.69 MiB/s [2024-11-19T09:49:31.780Z] 4841.33 IOPS, 18.91 MiB/s [2024-11-19T09:49:31.780Z] 5302.80 IOPS, 20.71 MiB/s 00:21:44.157 Latency(us) 00:21:44.157 [2024-11-19T09:49:31.780Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:44.157 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:44.157 Verification LBA range: start 0x0 length 0x4000 00:21:44.157 NVMe0n1 : 10.01 5308.61 20.74 3633.46 0.00 14286.70 677.70 3019898.88 00:21:44.157 [2024-11-19T09:49:31.780Z] =================================================================================================================== 00:21:44.157 [2024-11-19T09:49:31.780Z] Total : 5308.61 20.74 3633.46 0.00 14286.70 0.00 3019898.88 00:21:44.157 { 00:21:44.157 "results": [ 00:21:44.157 { 00:21:44.157 "job": "NVMe0n1", 00:21:44.157 "core_mask": "0x4", 00:21:44.157 "workload": "verify", 00:21:44.157 "status": "finished", 00:21:44.157 "verify_range": { 00:21:44.157 "start": 0, 00:21:44.157 "length": 16384 00:21:44.157 }, 00:21:44.157 "queue_depth": 128, 00:21:44.157 "io_size": 4096, 00:21:44.157 "runtime": 10.00864, 00:21:44.157 "iops": 5308.6133580586375, 00:21:44.157 "mibps": 20.736770929916553, 00:21:44.157 "io_failed": 36366, 00:21:44.157 "io_timeout": 0, 00:21:44.157 "avg_latency_us": 14286.701260485252, 00:21:44.157 "min_latency_us": 677.7018181818182, 00:21:44.157 "max_latency_us": 3019898.88 00:21:44.157 } 00:21:44.157 ], 00:21:44.157 "core_count": 1 00:21:44.157 } 00:21:44.157 09:49:31 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@105 -- # killprocess 82317 00:21:44.157 09:49:31 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 82317 ']' 00:21:44.157 09:49:31 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 82317 00:21:44.157 09:49:31 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:21:44.157 09:49:31 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:44.157 09:49:31 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82317 00:21:44.157 killing process with pid 82317 00:21:44.157 Received shutdown signal, test time was about 10.000000 seconds 00:21:44.157 00:21:44.157 Latency(us) 00:21:44.157 [2024-11-19T09:49:31.781Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:44.158 [2024-11-19T09:49:31.781Z] =================================================================================================================== 00:21:44.158 [2024-11-19T09:49:31.781Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:44.158 09:49:31 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:44.158 09:49:31 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:44.158 09:49:31 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82317' 00:21:44.158 09:49:31 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 82317 00:21:44.158 09:49:31 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 82317 00:21:44.158 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
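Illustrative aside, not part of the captured output: the MiB/s and Fail/s columns in the summary table above follow directly from the JSON fields bdevperf printed for the NVMe0n1 job. A small Python check, with the numbers copied from that JSON:

result = {
    "iops": 5308.6133580586375,
    "io_size": 4096,       # bytes per I/O
    "io_failed": 36366,
    "runtime": 10.00864,   # seconds
}

mib_per_sec = result["iops"] * result["io_size"] / (1024 * 1024)
fails_per_sec = result["io_failed"] / result["runtime"]

print(f"{mib_per_sec:.2f} MiB/s")     # ~20.74, matching the table row
print(f"{fails_per_sec:.2f} Fail/s")  # ~3633.46, matching the table row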
00:21:44.158 09:49:31 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@110 -- # bdevperf_pid=82555 00:21:44.158 09:49:31 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@112 -- # waitforlisten 82555 /var/tmp/bdevperf.sock 00:21:44.158 09:49:31 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f 00:21:44.158 09:49:31 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 82555 ']' 00:21:44.158 09:49:31 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:44.158 09:49:31 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:44.158 09:49:31 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:44.158 09:49:31 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:44.158 09:49:31 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:44.158 [2024-11-19 09:49:31.657290] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:21:44.158 [2024-11-19 09:49:31.657388] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82555 ] 00:21:44.417 [2024-11-19 09:49:31.802201] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:44.417 [2024-11-19 09:49:31.858111] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:44.417 [2024-11-19 09:49:31.911588] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:44.417 09:49:31 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:44.417 09:49:31 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:21:44.417 09:49:31 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@116 -- # dtrace_pid=82568 00:21:44.417 09:49:31 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:21:44.417 09:49:31 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 82555 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:21:44.678 09:49:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:21:44.940 NVMe0n1 00:21:44.940 09:49:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@124 -- # rpc_pid=82605 00:21:44.940 09:49:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:44.940 09:49:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@125 -- # sleep 1 00:21:45.201 Running I/O for 10 seconds... 
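Illustrative aside, not part of the captured output: the bdev_nvme_attach_controller call above passes --ctrlr-loss-timeout-sec 5 and --reconnect-delay-sec 2, i.e. reconnect attempts spaced about two seconds apart and abandoned once about five seconds have elapsed, which leaves room for only two or three attempts. SPDK's exact bookkeeping may differ; the back-of-the-envelope version is just:

import math

ctrlr_loss_timeout_sec = 5   # --ctrlr-loss-timeout-sec from the attach call above
reconnect_delay_sec = 2      # --reconnect-delay-sec from the attach call above

# Rough upper bound on reconnect attempts that fit in the loss-timeout window.
max_attempts = math.ceil(ctrlr_loss_timeout_sec / reconnect_delay_sec)
print(max_attempts)  # 3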
00:21:46.150 09:49:33 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:21:46.410 14482.00 IOPS, 56.57 MiB/s [2024-11-19T09:49:34.033Z] [2024-11-19 09:49:33.816646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:123368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.410 [2024-11-19 09:49:33.816696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.410 [2024-11-19 09:49:33.816736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:116672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.410 [2024-11-19 09:49:33.816747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.410 [2024-11-19 09:49:33.816758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:87152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.410 [2024-11-19 09:49:33.816767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.410 [2024-11-19 09:49:33.816778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:37896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.410 [2024-11-19 09:49:33.816787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.410 [2024-11-19 09:49:33.816798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:64424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.410 [2024-11-19 09:49:33.816807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.410 [2024-11-19 09:49:33.816817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:103656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.410 [2024-11-19 09:49:33.816826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.410 [2024-11-19 09:49:33.816836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:126488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.410 [2024-11-19 09:49:33.816845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.410 [2024-11-19 09:49:33.816855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:18544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.410 [2024-11-19 09:49:33.816864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.410 [2024-11-19 09:49:33.816874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:103688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.410 [2024-11-19 09:49:33.816883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.410 [2024-11-19 09:49:33.816894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:117 nsid:1 lba:2496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.410 [2024-11-19 09:49:33.816902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.410 [2024-11-19 09:49:33.816913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:93944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.410 [2024-11-19 09:49:33.816922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.410 [2024-11-19 09:49:33.816932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:81744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.410 [2024-11-19 09:49:33.816942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.410 [2024-11-19 09:49:33.816952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:109080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.410 [2024-11-19 09:49:33.816961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.410 [2024-11-19 09:49:33.816973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:123312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.410 [2024-11-19 09:49:33.816982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.410 [2024-11-19 09:49:33.816993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:14048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.410 [2024-11-19 09:49:33.817001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.410 [2024-11-19 09:49:33.817012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:104928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.410 [2024-11-19 09:49:33.817022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.410 [2024-11-19 09:49:33.817033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:12912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.411 [2024-11-19 09:49:33.817042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.411 [2024-11-19 09:49:33.817055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:22472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.411 [2024-11-19 09:49:33.817065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.411 [2024-11-19 09:49:33.817076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:43312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.411 [2024-11-19 09:49:33.817085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.411 [2024-11-19 09:49:33.817096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:73936 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.411 [2024-11-19 09:49:33.817120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.411 [2024-11-19 09:49:33.817130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:78120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.411 [2024-11-19 09:49:33.817139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.411 [2024-11-19 09:49:33.817149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:46688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.411 [2024-11-19 09:49:33.817157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.411 [2024-11-19 09:49:33.817167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:105640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.411 [2024-11-19 09:49:33.817176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.411 [2024-11-19 09:49:33.817186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:5200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.411 [2024-11-19 09:49:33.817194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.411 [2024-11-19 09:49:33.817204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:9408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.411 [2024-11-19 09:49:33.817212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.411 [2024-11-19 09:49:33.817238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:108408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.411 [2024-11-19 09:49:33.817248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.411 [2024-11-19 09:49:33.817288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:84912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.411 [2024-11-19 09:49:33.817298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.411 [2024-11-19 09:49:33.817309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:48160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.411 [2024-11-19 09:49:33.817318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.411 [2024-11-19 09:49:33.817328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:38760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.411 [2024-11-19 09:49:33.817337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.411 [2024-11-19 09:49:33.817348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:25736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:21:46.411 [2024-11-19 09:49:33.817357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.411 [2024-11-19 09:49:33.817367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:50200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.411 [2024-11-19 09:49:33.817376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.411 [2024-11-19 09:49:33.817386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:63336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.411 [2024-11-19 09:49:33.817395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.411 [2024-11-19 09:49:33.817406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:3744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.411 [2024-11-19 09:49:33.817415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.411 [2024-11-19 09:49:33.817426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:116344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.411 [2024-11-19 09:49:33.817436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.411 [2024-11-19 09:49:33.817446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:14120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.411 [2024-11-19 09:49:33.817455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.411 [2024-11-19 09:49:33.817466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:44264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.411 [2024-11-19 09:49:33.817475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.411 [2024-11-19 09:49:33.817485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:7040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.411 [2024-11-19 09:49:33.817495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.411 [2024-11-19 09:49:33.817505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:87488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.411 [2024-11-19 09:49:33.817514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.411 [2024-11-19 09:49:33.817526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:10688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.411 [2024-11-19 09:49:33.817535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.411 [2024-11-19 09:49:33.817545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:6848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.411 [2024-11-19 09:49:33.817554] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.411 [2024-11-19 09:49:33.817584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:10912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.411 [2024-11-19 09:49:33.817609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.411 [2024-11-19 09:49:33.817620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:121448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.411 [2024-11-19 09:49:33.817650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.411 [2024-11-19 09:49:33.817661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:91176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.411 [2024-11-19 09:49:33.817669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.411 [2024-11-19 09:49:33.817680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:91840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.411 [2024-11-19 09:49:33.817691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.411 [2024-11-19 09:49:33.817701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:119824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.411 [2024-11-19 09:49:33.817710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.411 [2024-11-19 09:49:33.817721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:19392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.411 [2024-11-19 09:49:33.817729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.411 [2024-11-19 09:49:33.817739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:8768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.411 [2024-11-19 09:49:33.817748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.411 [2024-11-19 09:49:33.817758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:28160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.411 [2024-11-19 09:49:33.817767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.411 [2024-11-19 09:49:33.817778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:76648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.411 [2024-11-19 09:49:33.817786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.411 [2024-11-19 09:49:33.817797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:114912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.411 [2024-11-19 09:49:33.817806] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.411 [2024-11-19 09:49:33.817816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:99160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.411 [2024-11-19 09:49:33.817825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.411 [2024-11-19 09:49:33.817836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:79168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.411 [2024-11-19 09:49:33.817844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.411 [2024-11-19 09:49:33.817855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:69560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.411 [2024-11-19 09:49:33.817864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.411 [2024-11-19 09:49:33.817874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:114712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.411 [2024-11-19 09:49:33.817883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.411 [2024-11-19 09:49:33.817893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:96840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.411 [2024-11-19 09:49:33.817902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.411 [2024-11-19 09:49:33.817913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:6336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.411 [2024-11-19 09:49:33.817936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.412 [2024-11-19 09:49:33.817946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:80792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.412 [2024-11-19 09:49:33.817954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.412 [2024-11-19 09:49:33.817964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:44920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.412 [2024-11-19 09:49:33.817973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.412 [2024-11-19 09:49:33.817983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:64864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.412 [2024-11-19 09:49:33.817992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.412 [2024-11-19 09:49:33.818001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:64544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.412 [2024-11-19 09:49:33.818010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.412 [2024-11-19 09:49:33.818020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:111896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.412 [2024-11-19 09:49:33.818029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.412 [2024-11-19 09:49:33.818039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:1136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.412 [2024-11-19 09:49:33.818048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.412 [2024-11-19 09:49:33.818058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:30320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.412 [2024-11-19 09:49:33.818067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.412 [2024-11-19 09:49:33.818077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:129696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.412 [2024-11-19 09:49:33.818086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.412 [2024-11-19 09:49:33.818095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:89552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.412 [2024-11-19 09:49:33.818104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.412 [2024-11-19 09:49:33.818115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:29680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.412 [2024-11-19 09:49:33.818123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.412 [2024-11-19 09:49:33.818133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:45776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.412 [2024-11-19 09:49:33.818142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.412 [2024-11-19 09:49:33.818152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:63216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.412 [2024-11-19 09:49:33.818161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.412 [2024-11-19 09:49:33.818171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:17104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.412 [2024-11-19 09:49:33.818180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.412 [2024-11-19 09:49:33.818189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:98904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.412 [2024-11-19 09:49:33.818198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.412 [2024-11-19 09:49:33.818208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:119640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.412 [2024-11-19 09:49:33.818216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.412 [2024-11-19 09:49:33.818226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:19272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.412 [2024-11-19 09:49:33.818234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.412 [2024-11-19 09:49:33.818245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:126128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.412 [2024-11-19 09:49:33.818253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.412 [2024-11-19 09:49:33.818263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:5064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.412 [2024-11-19 09:49:33.818282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.412 [2024-11-19 09:49:33.818294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:40024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.412 [2024-11-19 09:49:33.818303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.412 [2024-11-19 09:49:33.818320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:112984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.412 [2024-11-19 09:49:33.818329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.412 [2024-11-19 09:49:33.818339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:102272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.412 [2024-11-19 09:49:33.818348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.412 [2024-11-19 09:49:33.818358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:72584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.412 [2024-11-19 09:49:33.818366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.412 [2024-11-19 09:49:33.818376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:99736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.412 [2024-11-19 09:49:33.818385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.412 [2024-11-19 09:49:33.818395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:113232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.412 [2024-11-19 09:49:33.818403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:46.412 [2024-11-19 09:49:33.818413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:72400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.412 [2024-11-19 09:49:33.818422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.412 [2024-11-19 09:49:33.818432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:117928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.412 [2024-11-19 09:49:33.818442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.412 [2024-11-19 09:49:33.818453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:101000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.412 [2024-11-19 09:49:33.818461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.412 [2024-11-19 09:49:33.818471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:110320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.412 [2024-11-19 09:49:33.818479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.412 [2024-11-19 09:49:33.818489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:2496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.412 [2024-11-19 09:49:33.818497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.412 [2024-11-19 09:49:33.818507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:76176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.412 [2024-11-19 09:49:33.818516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.412 [2024-11-19 09:49:33.818526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:47584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.412 [2024-11-19 09:49:33.818534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.412 [2024-11-19 09:49:33.818544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:36712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.412 [2024-11-19 09:49:33.818552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.412 [2024-11-19 09:49:33.818562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:119520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.412 [2024-11-19 09:49:33.818570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.412 [2024-11-19 09:49:33.818580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:5176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.412 [2024-11-19 09:49:33.818589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.412 [2024-11-19 09:49:33.818599] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:1536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.412 [2024-11-19 09:49:33.818608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.412 [2024-11-19 09:49:33.818618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:105304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.412 [2024-11-19 09:49:33.818626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.412 [2024-11-19 09:49:33.818636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:129088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.412 [2024-11-19 09:49:33.818645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.412 [2024-11-19 09:49:33.818664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:130584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.412 [2024-11-19 09:49:33.818673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.412 [2024-11-19 09:49:33.818683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:103624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.412 [2024-11-19 09:49:33.818692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.412 [2024-11-19 09:49:33.818702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:100888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.413 [2024-11-19 09:49:33.818710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.413 [2024-11-19 09:49:33.818720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:14200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.413 [2024-11-19 09:49:33.818729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.413 [2024-11-19 09:49:33.818746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:104464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.413 [2024-11-19 09:49:33.818755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.413 [2024-11-19 09:49:33.818764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:12152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.413 [2024-11-19 09:49:33.818773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.413 [2024-11-19 09:49:33.818782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:113208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.413 [2024-11-19 09:49:33.818808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.413 [2024-11-19 09:49:33.818818] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:6544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.413 [2024-11-19 09:49:33.818826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.413 [2024-11-19 09:49:33.818836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:8880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.413 [2024-11-19 09:49:33.818845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.413 [2024-11-19 09:49:33.818855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:45800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.413 [2024-11-19 09:49:33.818863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.413 [2024-11-19 09:49:33.818873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:49672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.413 [2024-11-19 09:49:33.818882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.413 [2024-11-19 09:49:33.818892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:66352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.413 [2024-11-19 09:49:33.818900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.413 [2024-11-19 09:49:33.818915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:48600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.413 [2024-11-19 09:49:33.818924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.413 [2024-11-19 09:49:33.818934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:81776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.413 [2024-11-19 09:49:33.818943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.413 [2024-11-19 09:49:33.818953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:40752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.413 [2024-11-19 09:49:33.818962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.413 [2024-11-19 09:49:33.818972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:42080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.413 [2024-11-19 09:49:33.818980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.413 [2024-11-19 09:49:33.818991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:22792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.413 [2024-11-19 09:49:33.819000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.413 [2024-11-19 09:49:33.819010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:16 nsid:1 lba:35056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.413 [2024-11-19 09:49:33.819018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.413 [2024-11-19 09:49:33.819028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:83680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.413 [2024-11-19 09:49:33.819036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.413 [2024-11-19 09:49:33.819047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:51384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.413 [2024-11-19 09:49:33.819055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.413 [2024-11-19 09:49:33.819070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.413 [2024-11-19 09:49:33.819079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.413 [2024-11-19 09:49:33.819089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:67232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.413 [2024-11-19 09:49:33.819097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.413 [2024-11-19 09:49:33.819107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:54744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.413 [2024-11-19 09:49:33.819116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.413 [2024-11-19 09:49:33.819129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:34624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.413 [2024-11-19 09:49:33.819138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.413 [2024-11-19 09:49:33.819148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:56000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.413 [2024-11-19 09:49:33.819156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.413 [2024-11-19 09:49:33.819166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:90992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.413 [2024-11-19 09:49:33.819174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.413 [2024-11-19 09:49:33.819184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:81984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.413 [2024-11-19 09:49:33.819192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.413 [2024-11-19 09:49:33.819219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:111264 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.413 [2024-11-19 09:49:33.819228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.413 [2024-11-19 09:49:33.819276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:122288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.413 [2024-11-19 09:49:33.819288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.413 [2024-11-19 09:49:33.819299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.413 [2024-11-19 09:49:33.819309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.413 [2024-11-19 09:49:33.819320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:41416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.413 [2024-11-19 09:49:33.819330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.413 [2024-11-19 09:49:33.819341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:116960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.413 [2024-11-19 09:49:33.819350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.413 [2024-11-19 09:49:33.819361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.413 [2024-11-19 09:49:33.819370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.413 [2024-11-19 09:49:33.819381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.413 [2024-11-19 09:49:33.819390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.413 [2024-11-19 09:49:33.819401] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x886090 is same with the state(6) to be set 00:21:46.413 [2024-11-19 09:49:33.819414] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:46.413 [2024-11-19 09:49:33.819421] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:46.413 [2024-11-19 09:49:33.819431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77240 len:8 PRP1 0x0 PRP2 0x0 00:21:46.413 [2024-11-19 09:49:33.819445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.413 [2024-11-19 09:49:33.819808] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:21:46.413 [2024-11-19 09:49:33.819952] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x818e50 (9): Bad file descriptor 00:21:46.413 [2024-11-19 09:49:33.820065] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:21:46.413 [2024-11-19 09:49:33.820088] 
nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x818e50 with addr=10.0.0.3, port=4420 00:21:46.413 [2024-11-19 09:49:33.820100] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x818e50 is same with the state(6) to be set 00:21:46.413 [2024-11-19 09:49:33.820117] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x818e50 (9): Bad file descriptor 00:21:46.413 [2024-11-19 09:49:33.820133] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:21:46.413 [2024-11-19 09:49:33.820143] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:21:46.413 [2024-11-19 09:49:33.820153] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:21:46.413 [2024-11-19 09:49:33.820165] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 00:21:46.413 [2024-11-19 09:49:33.820176] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:21:46.413 09:49:33 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@128 -- # wait 82605 00:21:48.286 8320.50 IOPS, 32.50 MiB/s [2024-11-19T09:49:35.909Z] 5547.00 IOPS, 21.67 MiB/s [2024-11-19T09:49:35.909Z] [2024-11-19 09:49:35.820416] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:21:48.286 [2024-11-19 09:49:35.820508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x818e50 with addr=10.0.0.3, port=4420 00:21:48.286 [2024-11-19 09:49:35.820526] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x818e50 is same with the state(6) to be set 00:21:48.286 [2024-11-19 09:49:35.820552] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x818e50 (9): Bad file descriptor 00:21:48.286 [2024-11-19 09:49:35.820573] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:21:48.286 [2024-11-19 09:49:35.820584] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:21:48.286 [2024-11-19 09:49:35.820595] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:21:48.286 [2024-11-19 09:49:35.820606] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 
00:21:48.286 [2024-11-19 09:49:35.820617] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:21:50.159 4160.25 IOPS, 16.25 MiB/s [2024-11-19T09:49:38.042Z] 3328.20 IOPS, 13.00 MiB/s [2024-11-19T09:49:38.042Z] [2024-11-19 09:49:37.820858] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:21:50.419 [2024-11-19 09:49:37.820952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x818e50 with addr=10.0.0.3, port=4420 00:21:50.419 [2024-11-19 09:49:37.820970] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x818e50 is same with the state(6) to be set 00:21:50.419 [2024-11-19 09:49:37.820998] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x818e50 (9): Bad file descriptor 00:21:50.419 [2024-11-19 09:49:37.821018] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:21:50.419 [2024-11-19 09:49:37.821029] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:21:50.419 [2024-11-19 09:49:37.821040] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:21:50.419 [2024-11-19 09:49:37.821051] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 00:21:50.419 [2024-11-19 09:49:37.821062] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:21:52.305 2773.50 IOPS, 10.83 MiB/s [2024-11-19T09:49:39.928Z] 2377.29 IOPS, 9.29 MiB/s [2024-11-19T09:49:39.928Z] [2024-11-19 09:49:39.821157] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:21:52.305 [2024-11-19 09:49:39.821246] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:21:52.305 [2024-11-19 09:49:39.821261] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:21:52.305 [2024-11-19 09:49:39.821271] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] already in failed state 00:21:52.305 [2024-11-19 09:49:39.821283] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 
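The failure pattern above is exactly what the test is built to provoke: after the listener for nqn.2016-06.io.spdk:cnode1 is removed, the in-flight reads are aborted with SQ DELETION, the host disconnects the controller, and each reconnect attempt to 10.0.0.3:4420 fails with errno 111 (connection refused) roughly two seconds apart, matching --reconnect-delay-sec 2, until the last attempt finds the controller already in the failed state. A small sketch for checking that cadence from a captured copy of this output follows; the LOGFILE path is an assumption, not part of the test script.

# Sketch only; LOGFILE is a hypothetical saved copy of the bdevperf output shown above.
LOGFILE=bdevperf.log
# Extract the timestamp of every refused connect() and print the gap between consecutive attempts.
grep -o '\[2024-11-19 [0-9:.]*\] uring.c:[^*]*\*ERROR\*: connect() failed, errno = 111' "$LOGFILE" |
  awk '{ sub(/\]/, "", $2)              # $2 is the HH:MM:SS.ssssss timestamp
         split($2, t, ":")
         s = t[1] * 3600 + t[2] * 60 + t[3]
         if (NR > 1) printf "reconnect gap: %.3f s\n", s - prev
         prev = s }'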
00:21:53.243 2080.12 IOPS, 8.13 MiB/s 00:21:53.243 Latency(us) 00:21:53.243 [2024-11-19T09:49:40.866Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:53.243 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:21:53.243 NVMe0n1 : 8.15 2041.51 7.97 15.70 0.00 62169.28 1563.93 7015926.69 00:21:53.243 [2024-11-19T09:49:40.866Z] =================================================================================================================== 00:21:53.243 [2024-11-19T09:49:40.866Z] Total : 2041.51 7.97 15.70 0.00 62169.28 1563.93 7015926.69 00:21:53.243 { 00:21:53.243 "results": [ 00:21:53.243 { 00:21:53.243 "job": "NVMe0n1", 00:21:53.243 "core_mask": "0x4", 00:21:53.243 "workload": "randread", 00:21:53.243 "status": "finished", 00:21:53.243 "queue_depth": 128, 00:21:53.243 "io_size": 4096, 00:21:53.243 "runtime": 8.151323, 00:21:53.243 "iops": 2041.5090899968018, 00:21:53.243 "mibps": 7.974644882800007, 00:21:53.243 "io_failed": 128, 00:21:53.243 "io_timeout": 0, 00:21:53.243 "avg_latency_us": 62169.28069912556, 00:21:53.243 "min_latency_us": 1563.9272727272728, 00:21:53.243 "max_latency_us": 7015926.69090909 00:21:53.243 } 00:21:53.243 ], 00:21:53.243 "core_count": 1 00:21:53.243 } 00:21:53.243 09:49:40 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:53.243 Attaching 5 probes... 00:21:53.243 1300.857833: reset bdev controller NVMe0 00:21:53.243 1301.060901: reconnect bdev controller NVMe0 00:21:53.243 3301.285530: reconnect delay bdev controller NVMe0 00:21:53.243 3301.335236: reconnect bdev controller NVMe0 00:21:53.243 5301.753964: reconnect delay bdev controller NVMe0 00:21:53.243 5301.780768: reconnect bdev controller NVMe0 00:21:53.243 7302.171486: reconnect delay bdev controller NVMe0 00:21:53.243 7302.195887: reconnect bdev controller NVMe0 00:21:53.243 09:49:40 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:21:53.243 09:49:40 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:21:53.244 09:49:40 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@136 -- # kill 82568 00:21:53.244 09:49:40 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:53.244 09:49:40 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@139 -- # killprocess 82555 00:21:53.244 09:49:40 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 82555 ']' 00:21:53.244 09:49:40 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 82555 00:21:53.244 09:49:40 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:21:53.244 09:49:40 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:53.244 09:49:40 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82555 00:21:53.503 killing process with pid 82555 00:21:53.503 Received shutdown signal, test time was about 8.220409 seconds 00:21:53.503 00:21:53.503 Latency(us) 00:21:53.503 [2024-11-19T09:49:41.126Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:53.503 [2024-11-19T09:49:41.126Z] =================================================================================================================== 00:21:53.503 [2024-11-19T09:49:41.126Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:53.503 09:49:40 
nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:53.503 09:49:40 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:53.503 09:49:40 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82555' 00:21:53.503 09:49:40 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 82555 00:21:53.503 09:49:40 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 82555 00:21:53.503 09:49:41 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:54.070 09:49:41 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:21:54.070 09:49:41 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@145 -- # nvmftestfini 00:21:54.070 09:49:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:54.070 09:49:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@121 -- # sync 00:21:54.070 09:49:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:54.070 09:49:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@124 -- # set +e 00:21:54.070 09:49:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:54.070 09:49:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:54.070 rmmod nvme_tcp 00:21:54.070 rmmod nvme_fabrics 00:21:54.070 rmmod nvme_keyring 00:21:54.070 09:49:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:54.070 09:49:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@128 -- # set -e 00:21:54.070 09:49:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@129 -- # return 0 00:21:54.070 09:49:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@517 -- # '[' -n 82132 ']' 00:21:54.070 09:49:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@518 -- # killprocess 82132 00:21:54.070 09:49:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 82132 ']' 00:21:54.070 09:49:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 82132 00:21:54.070 09:49:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:21:54.070 09:49:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:54.070 09:49:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82132 00:21:54.070 killing process with pid 82132 00:21:54.070 09:49:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:54.070 09:49:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:54.070 09:49:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82132' 00:21:54.070 09:49:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 82132 00:21:54.070 09:49:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 82132 00:21:54.329 09:49:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:54.330 09:49:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:54.330 09:49:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:54.330 09:49:41 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@297 -- # iptr 00:21:54.330 09:49:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # iptables-save 00:21:54.330 09:49:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:54.330 09:49:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # iptables-restore 00:21:54.330 09:49:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:54.330 09:49:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:21:54.330 09:49:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:21:54.330 09:49:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:21:54.330 09:49:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:21:54.330 09:49:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:21:54.330 09:49:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:21:54.330 09:49:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:21:54.330 09:49:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:21:54.330 09:49:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:21:54.330 09:49:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:21:54.330 09:49:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:21:54.330 09:49:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:21:54.330 09:49:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:54.589 09:49:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:54.589 09:49:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@246 -- # remove_spdk_ns 00:21:54.589 09:49:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:54.589 09:49:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:54.590 09:49:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:54.590 09:49:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@300 -- # return 0 00:21:54.590 00:21:54.590 real 0m46.372s 00:21:54.590 user 2m15.997s 00:21:54.590 sys 0m5.655s 00:21:54.590 09:49:42 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:54.590 ************************************ 00:21:54.590 09:49:42 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:54.590 END TEST nvmf_timeout 00:21:54.590 ************************************ 00:21:54.590 09:49:42 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ virt == phy ]] 00:21:54.590 09:49:42 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:21:54.590 00:21:54.590 real 5m9.817s 00:21:54.590 user 13m29.712s 00:21:54.590 sys 1m9.342s 00:21:54.590 09:49:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:54.590 09:49:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 
00:21:54.590 ************************************ 00:21:54.590 END TEST nvmf_host 00:21:54.590 ************************************ 00:21:54.590 09:49:42 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:21:54.590 09:49:42 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 1 -eq 0 ]] 00:21:54.590 00:21:54.590 real 12m59.859s 00:21:54.590 user 31m19.261s 00:21:54.590 sys 3m9.677s 00:21:54.590 09:49:42 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:54.590 09:49:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:54.590 ************************************ 00:21:54.590 END TEST nvmf_tcp 00:21:54.590 ************************************ 00:21:54.590 09:49:42 -- spdk/autotest.sh@285 -- # [[ 1 -eq 0 ]] 00:21:54.590 09:49:42 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:21:54.590 09:49:42 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:54.590 09:49:42 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:54.590 09:49:42 -- common/autotest_common.sh@10 -- # set +x 00:21:54.590 ************************************ 00:21:54.590 START TEST nvmf_dif 00:21:54.590 ************************************ 00:21:54.590 09:49:42 nvmf_dif -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:21:54.850 * Looking for test storage... 00:21:54.850 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:21:54.850 09:49:42 nvmf_dif -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:54.850 09:49:42 nvmf_dif -- common/autotest_common.sh@1693 -- # lcov --version 00:21:54.850 09:49:42 nvmf_dif -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:54.850 09:49:42 nvmf_dif -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:54.850 09:49:42 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:54.850 09:49:42 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:54.850 09:49:42 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:54.850 09:49:42 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:21:54.850 09:49:42 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:21:54.850 09:49:42 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:21:54.850 09:49:42 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:21:54.850 09:49:42 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:21:54.850 09:49:42 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:21:54.850 09:49:42 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:21:54.850 09:49:42 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:54.850 09:49:42 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:21:54.850 09:49:42 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:21:54.850 09:49:42 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:54.850 09:49:42 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:54.850 09:49:42 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:21:54.850 09:49:42 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:21:54.850 09:49:42 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:54.850 09:49:42 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:21:54.850 09:49:42 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:21:54.850 09:49:42 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:21:54.850 09:49:42 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:21:54.850 09:49:42 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:54.850 09:49:42 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:21:54.850 09:49:42 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:21:54.850 09:49:42 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:54.850 09:49:42 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:54.850 09:49:42 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:21:54.850 09:49:42 nvmf_dif -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:54.850 09:49:42 nvmf_dif -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:54.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:54.850 --rc genhtml_branch_coverage=1 00:21:54.850 --rc genhtml_function_coverage=1 00:21:54.850 --rc genhtml_legend=1 00:21:54.850 --rc geninfo_all_blocks=1 00:21:54.850 --rc geninfo_unexecuted_blocks=1 00:21:54.850 00:21:54.850 ' 00:21:54.850 09:49:42 nvmf_dif -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:54.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:54.850 --rc genhtml_branch_coverage=1 00:21:54.850 --rc genhtml_function_coverage=1 00:21:54.850 --rc genhtml_legend=1 00:21:54.850 --rc geninfo_all_blocks=1 00:21:54.850 --rc geninfo_unexecuted_blocks=1 00:21:54.850 00:21:54.850 ' 00:21:54.850 09:49:42 nvmf_dif -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:54.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:54.850 --rc genhtml_branch_coverage=1 00:21:54.850 --rc genhtml_function_coverage=1 00:21:54.850 --rc genhtml_legend=1 00:21:54.850 --rc geninfo_all_blocks=1 00:21:54.850 --rc geninfo_unexecuted_blocks=1 00:21:54.850 00:21:54.850 ' 00:21:54.850 09:49:42 nvmf_dif -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:54.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:54.850 --rc genhtml_branch_coverage=1 00:21:54.850 --rc genhtml_function_coverage=1 00:21:54.850 --rc genhtml_legend=1 00:21:54.850 --rc geninfo_all_blocks=1 00:21:54.850 --rc geninfo_unexecuted_blocks=1 00:21:54.850 00:21:54.850 ' 00:21:54.850 09:49:42 nvmf_dif -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:54.850 09:49:42 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:21:54.850 09:49:42 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:54.850 09:49:42 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:54.850 09:49:42 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:54.850 09:49:42 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:54.850 09:49:42 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:54.850 09:49:42 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:54.850 09:49:42 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:54.850 09:49:42 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:54.850 09:49:42 nvmf_dif -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:54.850 09:49:42 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:54.850 09:49:42 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c 00:21:54.850 09:49:42 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=9203ba0c-8506-4f0b-a886-a7f874c4694c 00:21:54.850 09:49:42 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:54.850 09:49:42 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:54.850 09:49:42 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:54.850 09:49:42 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:54.850 09:49:42 nvmf_dif -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:54.850 09:49:42 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:21:54.850 09:49:42 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:54.850 09:49:42 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:54.850 09:49:42 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:54.850 09:49:42 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:54.850 09:49:42 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:54.850 09:49:42 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:54.850 09:49:42 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:21:54.850 09:49:42 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:54.850 09:49:42 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:21:54.850 09:49:42 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:54.850 09:49:42 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:54.850 09:49:42 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:54.850 09:49:42 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:54.850 09:49:42 nvmf_dif -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:54.850 09:49:42 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:54.850 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:54.850 09:49:42 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:54.850 09:49:42 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:54.850 09:49:42 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:54.850 09:49:42 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:21:54.850 09:49:42 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:21:54.850 09:49:42 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:21:54.850 09:49:42 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:21:54.850 09:49:42 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:21:54.850 09:49:42 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:54.850 09:49:42 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:54.850 09:49:42 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:54.850 09:49:42 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:54.850 09:49:42 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:54.850 09:49:42 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:54.850 09:49:42 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:21:54.850 09:49:42 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:54.850 09:49:42 nvmf_dif -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:21:54.850 09:49:42 nvmf_dif -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:21:54.850 09:49:42 nvmf_dif -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:21:54.850 09:49:42 nvmf_dif -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:21:54.850 09:49:42 nvmf_dif -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:21:54.850 09:49:42 nvmf_dif -- nvmf/common.sh@460 -- # nvmf_veth_init 00:21:54.850 09:49:42 nvmf_dif -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:54.850 09:49:42 nvmf_dif -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:21:54.850 09:49:42 nvmf_dif -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:21:54.850 09:49:42 nvmf_dif -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:21:54.850 09:49:42 nvmf_dif -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:54.851 09:49:42 nvmf_dif -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:21:54.851 09:49:42 nvmf_dif -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:54.851 09:49:42 nvmf_dif -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:21:54.851 09:49:42 nvmf_dif -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:54.851 09:49:42 nvmf_dif -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:21:54.851 09:49:42 nvmf_dif -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:54.851 09:49:42 nvmf_dif -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:54.851 09:49:42 nvmf_dif -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:54.851 09:49:42 nvmf_dif -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:54.851 09:49:42 nvmf_dif -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:54.851 09:49:42 nvmf_dif -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:54.851 09:49:42 nvmf_dif -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:21:54.851 Cannot find device 
"nvmf_init_br" 00:21:54.851 09:49:42 nvmf_dif -- nvmf/common.sh@162 -- # true 00:21:54.851 09:49:42 nvmf_dif -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:21:54.851 Cannot find device "nvmf_init_br2" 00:21:54.851 09:49:42 nvmf_dif -- nvmf/common.sh@163 -- # true 00:21:54.851 09:49:42 nvmf_dif -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:21:54.851 Cannot find device "nvmf_tgt_br" 00:21:54.851 09:49:42 nvmf_dif -- nvmf/common.sh@164 -- # true 00:21:54.851 09:49:42 nvmf_dif -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:21:55.109 Cannot find device "nvmf_tgt_br2" 00:21:55.109 09:49:42 nvmf_dif -- nvmf/common.sh@165 -- # true 00:21:55.109 09:49:42 nvmf_dif -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:21:55.109 Cannot find device "nvmf_init_br" 00:21:55.109 09:49:42 nvmf_dif -- nvmf/common.sh@166 -- # true 00:21:55.109 09:49:42 nvmf_dif -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:21:55.109 Cannot find device "nvmf_init_br2" 00:21:55.109 09:49:42 nvmf_dif -- nvmf/common.sh@167 -- # true 00:21:55.109 09:49:42 nvmf_dif -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:21:55.109 Cannot find device "nvmf_tgt_br" 00:21:55.109 09:49:42 nvmf_dif -- nvmf/common.sh@168 -- # true 00:21:55.109 09:49:42 nvmf_dif -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:21:55.109 Cannot find device "nvmf_tgt_br2" 00:21:55.109 09:49:42 nvmf_dif -- nvmf/common.sh@169 -- # true 00:21:55.109 09:49:42 nvmf_dif -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:21:55.109 Cannot find device "nvmf_br" 00:21:55.109 09:49:42 nvmf_dif -- nvmf/common.sh@170 -- # true 00:21:55.109 09:49:42 nvmf_dif -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:21:55.109 Cannot find device "nvmf_init_if" 00:21:55.109 09:49:42 nvmf_dif -- nvmf/common.sh@171 -- # true 00:21:55.109 09:49:42 nvmf_dif -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:21:55.109 Cannot find device "nvmf_init_if2" 00:21:55.109 09:49:42 nvmf_dif -- nvmf/common.sh@172 -- # true 00:21:55.109 09:49:42 nvmf_dif -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:55.109 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:55.109 09:49:42 nvmf_dif -- nvmf/common.sh@173 -- # true 00:21:55.109 09:49:42 nvmf_dif -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:55.109 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:55.109 09:49:42 nvmf_dif -- nvmf/common.sh@174 -- # true 00:21:55.109 09:49:42 nvmf_dif -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:21:55.109 09:49:42 nvmf_dif -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:55.109 09:49:42 nvmf_dif -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:21:55.109 09:49:42 nvmf_dif -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:55.109 09:49:42 nvmf_dif -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:55.109 09:49:42 nvmf_dif -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:55.109 09:49:42 nvmf_dif -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:55.109 09:49:42 nvmf_dif -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:55.109 09:49:42 nvmf_dif -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev 
nvmf_init_if2 00:21:55.109 09:49:42 nvmf_dif -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:21:55.109 09:49:42 nvmf_dif -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:21:55.109 09:49:42 nvmf_dif -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:21:55.109 09:49:42 nvmf_dif -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:21:55.109 09:49:42 nvmf_dif -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:21:55.109 09:49:42 nvmf_dif -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:21:55.109 09:49:42 nvmf_dif -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:21:55.109 09:49:42 nvmf_dif -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:21:55.109 09:49:42 nvmf_dif -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:55.109 09:49:42 nvmf_dif -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:55.109 09:49:42 nvmf_dif -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:55.109 09:49:42 nvmf_dif -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:21:55.109 09:49:42 nvmf_dif -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:21:55.109 09:49:42 nvmf_dif -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:21:55.109 09:49:42 nvmf_dif -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:21:55.369 09:49:42 nvmf_dif -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:55.369 09:49:42 nvmf_dif -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:55.369 09:49:42 nvmf_dif -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:55.369 09:49:42 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:21:55.369 09:49:42 nvmf_dif -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:21:55.369 09:49:42 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:21:55.369 09:49:42 nvmf_dif -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:55.369 09:49:42 nvmf_dif -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:21:55.369 09:49:42 nvmf_dif -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:21:55.369 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:55.369 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.095 ms 00:21:55.369 00:21:55.369 --- 10.0.0.3 ping statistics --- 00:21:55.369 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:55.369 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:21:55.369 09:49:42 nvmf_dif -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:21:55.369 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:21:55.369 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.051 ms 00:21:55.369 00:21:55.369 --- 10.0.0.4 ping statistics --- 00:21:55.369 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:55.369 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:21:55.369 09:49:42 nvmf_dif -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:55.369 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:55.369 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.045 ms 00:21:55.369 00:21:55.369 --- 10.0.0.1 ping statistics --- 00:21:55.369 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:55.369 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:21:55.369 09:49:42 nvmf_dif -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:21:55.369 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:55.369 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.066 ms 00:21:55.369 00:21:55.369 --- 10.0.0.2 ping statistics --- 00:21:55.369 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:55.369 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:21:55.369 09:49:42 nvmf_dif -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:55.369 09:49:42 nvmf_dif -- nvmf/common.sh@461 -- # return 0 00:21:55.369 09:49:42 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:21:55.369 09:49:42 nvmf_dif -- nvmf/common.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:21:55.629 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:55.629 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:21:55.629 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:21:55.629 09:49:43 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:55.629 09:49:43 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:55.629 09:49:43 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:55.629 09:49:43 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:55.629 09:49:43 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:55.629 09:49:43 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:55.629 09:49:43 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:21:55.629 09:49:43 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:21:55.629 09:49:43 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:55.629 09:49:43 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:55.629 09:49:43 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:21:55.629 09:49:43 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=83109 00:21:55.629 09:49:43 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:55.629 09:49:43 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 83109 00:21:55.629 09:49:43 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 83109 ']' 00:21:55.629 09:49:43 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:55.629 09:49:43 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:55.629 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:55.629 09:49:43 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
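The nvmftestinit/nvmf_veth_init sequence traced above builds the virtual topology the dif tests run on: veth pairs for two initiator-side and two target-side interfaces, the target ends moved into the nvmf_tgt_ns_spdk namespace, the peer ends enslaved to a single nvmf_br bridge, and iptables ACCEPT rules for TCP port 4420. Condensed to one initiator/target pair (the second pair, nvmf_init_if2/nvmf_tgt_if2 on 10.0.0.2 and 10.0.0.4, is set up the same way):

    # target namespace plus one veth pair per side
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # target side
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

    # addressing: 10.0.0.1 = initiator, 10.0.0.3 = target
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

    # bring the links up and join the peer ends to the bridge
    ip link add nvmf_br type bridge
    ip link set nvmf_init_if up; ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up;  ip link set nvmf_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br

    # let NVMe/TCP traffic reach the default port
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

The four ping checks above confirm the bridge forwards in both directions before nvmf_tgt is started inside the namespace (ip netns exec nvmf_tgt_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF, pid 83109), after which the fio_dif tests connect to it at 10.0.0.3:4420.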
00:21:55.629 09:49:43 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:55.629 09:49:43 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:21:55.888 [2024-11-19 09:49:43.264537] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:21:55.888 [2024-11-19 09:49:43.264631] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:55.888 [2024-11-19 09:49:43.417773] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:55.888 [2024-11-19 09:49:43.481531] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:55.888 [2024-11-19 09:49:43.481596] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:55.888 [2024-11-19 09:49:43.481610] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:55.888 [2024-11-19 09:49:43.481620] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:55.888 [2024-11-19 09:49:43.481628] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:55.888 [2024-11-19 09:49:43.482085] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:56.147 [2024-11-19 09:49:43.541508] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:56.714 09:49:44 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:56.714 09:49:44 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:21:56.714 09:49:44 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:56.714 09:49:44 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:56.714 09:49:44 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:21:56.714 09:49:44 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:56.714 09:49:44 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:21:56.714 09:49:44 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:21:56.714 09:49:44 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.714 09:49:44 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:21:56.714 [2024-11-19 09:49:44.322542] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:56.714 09:49:44 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.714 09:49:44 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:21:56.714 09:49:44 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:56.714 09:49:44 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:56.714 09:49:44 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:21:56.714 ************************************ 00:21:56.714 START TEST fio_dif_1_default 00:21:56.714 ************************************ 00:21:56.982 09:49:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:21:56.982 09:49:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:21:56.982 09:49:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:21:56.982 09:49:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:21:56.982 09:49:44 
nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:21:56.982 09:49:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:21:56.982 09:49:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:21:56.982 09:49:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.982 09:49:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:21:56.982 bdev_null0 00:21:56.982 09:49:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.982 09:49:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:21:56.982 09:49:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.982 09:49:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:21:56.982 09:49:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.982 09:49:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:21:56.982 09:49:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.982 09:49:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:21:56.982 09:49:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.982 09:49:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:21:56.982 09:49:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.982 09:49:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:21:56.982 [2024-11-19 09:49:44.366658] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:56.982 09:49:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.982 09:49:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:21:56.982 09:49:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:21:56.983 09:49:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:21:56.983 09:49:44 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:21:56.983 09:49:44 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:21:56.983 09:49:44 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:56.983 09:49:44 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:56.983 { 00:21:56.983 "params": { 00:21:56.983 "name": "Nvme$subsystem", 00:21:56.983 "trtype": "$TEST_TRANSPORT", 00:21:56.983 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:56.983 "adrfam": "ipv4", 00:21:56.983 "trsvcid": "$NVMF_PORT", 00:21:56.983 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:56.983 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:56.983 "hdgst": ${hdgst:-false}, 00:21:56.983 "ddgst": ${ddgst:-false} 00:21:56.983 }, 00:21:56.983 "method": "bdev_nvme_attach_controller" 00:21:56.983 } 00:21:56.983 EOF 00:21:56.983 )") 00:21:56.983 09:49:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:56.983 09:49:44 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@82 -- # gen_fio_conf 00:21:56.983 09:49:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:56.983 09:49:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:21:56.983 09:49:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:21:56.983 09:49:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:21:56.983 09:49:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:56.983 09:49:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:21:56.983 09:49:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:56.983 09:49:44 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:21:56.983 09:49:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:21:56.983 09:49:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:21:56.983 09:49:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:21:56.983 09:49:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:21:56.983 09:49:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:21:56.983 09:49:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:56.983 09:49:44 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 00:21:56.983 09:49:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:21:56.983 09:49:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:21:56.983 09:49:44 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:21:56.983 09:49:44 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:21:56.983 "params": { 00:21:56.983 "name": "Nvme0", 00:21:56.983 "trtype": "tcp", 00:21:56.983 "traddr": "10.0.0.3", 00:21:56.983 "adrfam": "ipv4", 00:21:56.983 "trsvcid": "4420", 00:21:56.983 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:56.983 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:56.983 "hdgst": false, 00:21:56.983 "ddgst": false 00:21:56.983 }, 00:21:56.983 "method": "bdev_nvme_attach_controller" 00:21:56.983 }' 00:21:56.983 09:49:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:21:56.983 09:49:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:21:56.983 09:49:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:21:56.983 09:49:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:56.983 09:49:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:21:56.983 09:49:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:21:56.983 09:49:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:21:56.983 09:49:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:21:56.983 09:49:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 
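Each fio_dif job drives I/O through fio's external spdk_bdev engine rather than a kernel NVMe device: the SPDK fio plugin is LD_PRELOADed, the bdev JSON printed above is handed to fio on /dev/fd/62 via --spdk_json_conf, and the generated job file arrives on /dev/fd/61. A rough stand-alone sketch of the same invocation follows; the subsystems/config wrapper, the Nvme0n1 bdev name and the job-file contents are inferred from the trace and usual SPDK conventions, not copied from the suite:

    # bdev config: attach the target's cnode0 over NVMe/TCP as controller "Nvme0"
    cat > /tmp/nvme0.json <<'EOF'
    { "subsystems": [ { "subsystem": "bdev", "config": [ {
        "method": "bdev_nvme_attach_controller",
        "params": {
          "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.3",
          "adrfam": "ipv4", "trsvcid": "4420",
          "subnqn": "nqn.2016-06.io.spdk:cnode0",
          "hostnqn": "nqn.2016-06.io.spdk:host0",
          "hdgst": false, "ddgst": false
        } } ] } ] }
    EOF

    # job file: namespace 1 of controller Nvme0 is exposed as bdev "Nvme0n1"
    cat > /tmp/dif.fio <<'EOF'
    [filename0]
    ioengine=spdk_bdev
    thread=1
    filename=Nvme0n1
    rw=randread
    bs=4k
    iodepth=4
    time_based=1
    runtime=10
    EOF

    LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
        /usr/src/fio/fio /tmp/dif.fio --ioengine=spdk_bdev --spdk_json_conf=/tmp/nvme0.json

The 'filename0: rw=randread, bs=4096B, iodepth=4' banner and the 10001 msec run time in the results below are what those job parameters were inferred from.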
00:21:56.983 09:49:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:57.255 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:21:57.255 fio-3.35 00:21:57.255 Starting 1 thread 00:22:09.464 00:22:09.464 filename0: (groupid=0, jobs=1): err= 0: pid=83170: Tue Nov 19 09:49:55 2024 00:22:09.464 read: IOPS=8560, BW=33.4MiB/s (35.1MB/s)(334MiB/10001msec) 00:22:09.464 slat (nsec): min=6404, max=83439, avg=8872.76, stdev=3892.07 00:22:09.464 clat (usec): min=342, max=3037, avg=441.06, stdev=41.74 00:22:09.464 lat (usec): min=348, max=3074, avg=449.93, stdev=42.50 00:22:09.464 clat percentiles (usec): 00:22:09.464 | 1.00th=[ 367], 5.00th=[ 383], 10.00th=[ 396], 20.00th=[ 412], 00:22:09.464 | 30.00th=[ 424], 40.00th=[ 433], 50.00th=[ 441], 60.00th=[ 449], 00:22:09.464 | 70.00th=[ 457], 80.00th=[ 469], 90.00th=[ 486], 95.00th=[ 498], 00:22:09.464 | 99.00th=[ 529], 99.50th=[ 545], 99.90th=[ 578], 99.95th=[ 611], 00:22:09.464 | 99.99th=[ 1860] 00:22:09.464 bw ( KiB/s): min=33184, max=35520, per=100.00%, avg=34293.89, stdev=623.62, samples=19 00:22:09.464 iops : min= 8296, max= 8880, avg=8573.47, stdev=155.90, samples=19 00:22:09.464 lat (usec) : 500=95.58%, 750=4.40% 00:22:09.464 lat (msec) : 2=0.01%, 4=0.01% 00:22:09.464 cpu : usr=85.07%, sys=12.77%, ctx=26, majf=0, minf=9 00:22:09.464 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:09.464 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:09.464 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:09.464 issued rwts: total=85612,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:09.464 latency : target=0, window=0, percentile=100.00%, depth=4 00:22:09.464 00:22:09.464 Run status group 0 (all jobs): 00:22:09.464 READ: bw=33.4MiB/s (35.1MB/s), 33.4MiB/s-33.4MiB/s (35.1MB/s-35.1MB/s), io=334MiB (351MB), run=10001-10001msec 00:22:09.464 09:49:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:22:09.464 09:49:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:22:09.464 09:49:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:22:09.464 09:49:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:22:09.464 09:49:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:22:09.464 09:49:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:22:09.464 09:49:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.464 09:49:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:22:09.464 09:49:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.464 09:49:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:22:09.464 09:49:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.464 09:49:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:22:09.464 ************************************ 00:22:09.464 END TEST fio_dif_1_default 00:22:09.464 ************************************ 00:22:09.464 09:49:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.464 00:22:09.464 real 0m11.061s 00:22:09.464 user 0m9.180s 00:22:09.464 sys 0m1.567s 
00:22:09.464 09:49:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:09.464 09:49:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:22:09.464 09:49:55 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:22:09.464 09:49:55 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:09.464 09:49:55 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:09.464 09:49:55 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:22:09.464 ************************************ 00:22:09.464 START TEST fio_dif_1_multi_subsystems 00:22:09.464 ************************************ 00:22:09.464 09:49:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:22:09.464 09:49:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:22:09.465 09:49:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:22:09.465 09:49:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:22:09.465 09:49:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:22:09.465 09:49:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:22:09.465 09:49:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:22:09.465 09:49:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:22:09.465 09:49:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.465 09:49:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:22:09.465 bdev_null0 00:22:09.465 09:49:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.465 09:49:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:22:09.465 09:49:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.465 09:49:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:22:09.465 09:49:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.465 09:49:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:22:09.465 09:49:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.465 09:49:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:22:09.465 09:49:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.465 09:49:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:22:09.465 09:49:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.465 09:49:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:22:09.465 [2024-11-19 09:49:55.481450] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:22:09.465 09:49:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.465 
09:49:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:22:09.465 09:49:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:22:09.465 09:49:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:22:09.465 09:49:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:22:09.465 09:49:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.465 09:49:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:22:09.465 bdev_null1 00:22:09.465 09:49:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.465 09:49:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:22:09.465 09:49:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.465 09:49:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:22:09.465 09:49:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.465 09:49:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:22:09.465 09:49:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.465 09:49:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:22:09.465 09:49:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.465 09:49:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:22:09.465 09:49:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.465 09:49:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:22:09.465 09:49:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.465 09:49:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:22:09.465 09:49:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:22:09.465 09:49:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:22:09.465 09:49:55 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:22:09.465 09:49:55 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:22:09.465 09:49:55 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:09.465 09:49:55 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:09.465 { 00:22:09.465 "params": { 00:22:09.465 "name": "Nvme$subsystem", 00:22:09.465 "trtype": "$TEST_TRANSPORT", 00:22:09.465 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:09.465 "adrfam": "ipv4", 00:22:09.465 "trsvcid": "$NVMF_PORT", 00:22:09.465 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:09.465 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:09.465 "hdgst": ${hdgst:-false}, 00:22:09.465 "ddgst": ${ddgst:-false} 00:22:09.465 }, 00:22:09.465 "method": "bdev_nvme_attach_controller" 00:22:09.465 } 00:22:09.465 EOF 00:22:09.465 )") 
00:22:09.465 09:49:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:09.465 09:49:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:22:09.465 09:49:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:09.465 09:49:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:22:09.465 09:49:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:22:09.465 09:49:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:22:09.465 09:49:55 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:22:09.465 09:49:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:09.465 09:49:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:22:09.465 09:49:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:09.465 09:49:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:22:09.465 09:49:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:22:09.465 09:49:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:09.465 09:49:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:22:09.465 09:49:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:22:09.465 09:49:55 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:09.465 09:49:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:22:09.465 09:49:55 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:09.465 { 00:22:09.465 "params": { 00:22:09.465 "name": "Nvme$subsystem", 00:22:09.465 "trtype": "$TEST_TRANSPORT", 00:22:09.465 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:09.465 "adrfam": "ipv4", 00:22:09.465 "trsvcid": "$NVMF_PORT", 00:22:09.465 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:09.465 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:09.465 "hdgst": ${hdgst:-false}, 00:22:09.465 "ddgst": ${ddgst:-false} 00:22:09.465 }, 00:22:09.466 "method": "bdev_nvme_attach_controller" 00:22:09.466 } 00:22:09.466 EOF 00:22:09.466 )") 00:22:09.466 09:49:55 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:22:09.466 09:49:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:09.466 09:49:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:22:09.466 09:49:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:09.466 09:49:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:22:09.466 09:49:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:22:09.466 09:49:55 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 
00:22:09.466 09:49:55 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:22:09.466 09:49:55 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:09.466 "params": { 00:22:09.466 "name": "Nvme0", 00:22:09.466 "trtype": "tcp", 00:22:09.466 "traddr": "10.0.0.3", 00:22:09.466 "adrfam": "ipv4", 00:22:09.466 "trsvcid": "4420", 00:22:09.466 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:09.466 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:09.466 "hdgst": false, 00:22:09.466 "ddgst": false 00:22:09.466 }, 00:22:09.466 "method": "bdev_nvme_attach_controller" 00:22:09.466 },{ 00:22:09.466 "params": { 00:22:09.466 "name": "Nvme1", 00:22:09.466 "trtype": "tcp", 00:22:09.466 "traddr": "10.0.0.3", 00:22:09.466 "adrfam": "ipv4", 00:22:09.466 "trsvcid": "4420", 00:22:09.466 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:09.466 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:09.466 "hdgst": false, 00:22:09.466 "ddgst": false 00:22:09.466 }, 00:22:09.466 "method": "bdev_nvme_attach_controller" 00:22:09.466 }' 00:22:09.466 09:49:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:22:09.466 09:49:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:22:09.466 09:49:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:09.466 09:49:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:09.466 09:49:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:09.466 09:49:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:22:09.466 09:49:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:22:09.466 09:49:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:22:09.466 09:49:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:22:09.466 09:49:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:09.466 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:22:09.466 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:22:09.466 fio-3.35 00:22:09.466 Starting 2 threads 00:22:19.449 00:22:19.449 filename0: (groupid=0, jobs=1): err= 0: pid=83330: Tue Nov 19 09:50:06 2024 00:22:19.449 read: IOPS=4683, BW=18.3MiB/s (19.2MB/s)(183MiB/10001msec) 00:22:19.449 slat (nsec): min=6681, max=96198, avg=13926.46, stdev=4975.07 00:22:19.449 clat (usec): min=447, max=3125, avg=815.63, stdev=45.30 00:22:19.449 lat (usec): min=454, max=3152, avg=829.56, stdev=45.74 00:22:19.449 clat percentiles (usec): 00:22:19.449 | 1.00th=[ 725], 5.00th=[ 758], 10.00th=[ 766], 20.00th=[ 783], 00:22:19.449 | 30.00th=[ 791], 40.00th=[ 807], 50.00th=[ 816], 60.00th=[ 824], 00:22:19.449 | 70.00th=[ 832], 80.00th=[ 848], 90.00th=[ 865], 95.00th=[ 881], 00:22:19.449 | 99.00th=[ 922], 99.50th=[ 938], 99.90th=[ 971], 99.95th=[ 979], 00:22:19.449 | 99.99th=[ 1020] 00:22:19.449 bw ( KiB/s): min=18624, max=18880, per=50.05%, avg=18748.89, stdev=82.89, samples=19 00:22:19.449 iops : min= 4656, max= 
4720, avg=4687.21, stdev=20.71, samples=19 00:22:19.449 lat (usec) : 500=0.03%, 750=3.79%, 1000=96.17% 00:22:19.449 lat (msec) : 2=0.01%, 4=0.01% 00:22:19.449 cpu : usr=89.67%, sys=8.77%, ctx=125, majf=0, minf=0 00:22:19.449 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:19.449 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:19.449 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:19.449 issued rwts: total=46836,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:19.449 latency : target=0, window=0, percentile=100.00%, depth=4 00:22:19.449 filename1: (groupid=0, jobs=1): err= 0: pid=83331: Tue Nov 19 09:50:06 2024 00:22:19.449 read: IOPS=4681, BW=18.3MiB/s (19.2MB/s)(183MiB/10001msec) 00:22:19.449 slat (usec): min=4, max=130, avg=13.71, stdev= 4.83 00:22:19.449 clat (usec): min=614, max=4334, avg=817.35, stdev=60.59 00:22:19.449 lat (usec): min=623, max=4366, avg=831.05, stdev=61.53 00:22:19.449 clat percentiles (usec): 00:22:19.449 | 1.00th=[ 701], 5.00th=[ 734], 10.00th=[ 750], 20.00th=[ 775], 00:22:19.449 | 30.00th=[ 791], 40.00th=[ 807], 50.00th=[ 816], 60.00th=[ 832], 00:22:19.449 | 70.00th=[ 840], 80.00th=[ 857], 90.00th=[ 881], 95.00th=[ 898], 00:22:19.449 | 99.00th=[ 930], 99.50th=[ 947], 99.90th=[ 979], 99.95th=[ 996], 00:22:19.449 | 99.99th=[ 2409] 00:22:19.449 bw ( KiB/s): min=18560, max=18880, per=50.04%, avg=18743.58, stdev=87.21, samples=19 00:22:19.449 iops : min= 4640, max= 4720, avg=4685.89, stdev=21.80, samples=19 00:22:19.449 lat (usec) : 750=9.51%, 1000=90.45% 00:22:19.449 lat (msec) : 2=0.02%, 4=0.01%, 10=0.01% 00:22:19.449 cpu : usr=90.16%, sys=8.32%, ctx=107, majf=0, minf=0 00:22:19.449 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:19.449 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:19.449 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:19.449 issued rwts: total=46816,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:19.449 latency : target=0, window=0, percentile=100.00%, depth=4 00:22:19.449 00:22:19.449 Run status group 0 (all jobs): 00:22:19.449 READ: bw=36.6MiB/s (38.4MB/s), 18.3MiB/s-18.3MiB/s (19.2MB/s-19.2MB/s), io=366MiB (384MB), run=10001-10001msec 00:22:19.449 09:50:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:22:19.449 09:50:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:22:19.449 09:50:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:22:19.449 09:50:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:22:19.449 09:50:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:22:19.449 09:50:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:22:19.449 09:50:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.449 09:50:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:22:19.449 09:50:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.449 09:50:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:22:19.449 09:50:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.450 09:50:06 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:22:19.450 09:50:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.450 09:50:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:22:19.450 09:50:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:22:19.450 09:50:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:22:19.450 09:50:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:19.450 09:50:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.450 09:50:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:22:19.450 09:50:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.450 09:50:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:22:19.450 09:50:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.450 09:50:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:22:19.450 ************************************ 00:22:19.450 END TEST fio_dif_1_multi_subsystems 00:22:19.450 ************************************ 00:22:19.450 09:50:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.450 00:22:19.450 real 0m11.206s 00:22:19.450 user 0m18.807s 00:22:19.450 sys 0m1.984s 00:22:19.450 09:50:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:19.450 09:50:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:22:19.450 09:50:06 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:22:19.450 09:50:06 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:19.450 09:50:06 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:19.450 09:50:06 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:22:19.450 ************************************ 00:22:19.450 START TEST fio_dif_rand_params 00:22:19.450 ************************************ 00:22:19.450 09:50:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:22:19.450 09:50:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:22:19.450 09:50:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:22:19.450 09:50:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:22:19.450 09:50:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:22:19.450 09:50:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:22:19.450 09:50:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:22:19.450 09:50:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:22:19.450 09:50:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:22:19.450 09:50:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:22:19.450 09:50:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:22:19.450 09:50:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:22:19.450 09:50:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:22:19.450 09:50:06 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:22:19.450 09:50:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.450 09:50:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:19.450 bdev_null0 00:22:19.450 09:50:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.450 09:50:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:22:19.450 09:50:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.450 09:50:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:19.450 09:50:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.450 09:50:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:22:19.450 09:50:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.450 09:50:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:19.450 09:50:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.450 09:50:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:22:19.450 09:50:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.450 09:50:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:19.450 [2024-11-19 09:50:06.750622] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:22:19.450 09:50:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.450 09:50:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:22:19.450 09:50:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:22:19.450 09:50:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:22:19.450 09:50:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:22:19.450 09:50:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:22:19.450 09:50:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:19.450 09:50:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:19.450 09:50:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:22:19.450 09:50:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:19.450 09:50:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:19.450 { 00:22:19.450 "params": { 00:22:19.450 "name": "Nvme$subsystem", 00:22:19.450 "trtype": "$TEST_TRANSPORT", 00:22:19.450 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:19.450 "adrfam": "ipv4", 00:22:19.450 "trsvcid": "$NVMF_PORT", 00:22:19.450 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:19.450 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:19.450 "hdgst": ${hdgst:-false}, 00:22:19.450 "ddgst": ${ddgst:-false} 
00:22:19.450 }, 00:22:19.450 "method": "bdev_nvme_attach_controller" 00:22:19.450 } 00:22:19.450 EOF 00:22:19.450 )") 00:22:19.450 09:50:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:22:19.450 09:50:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:22:19.450 09:50:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:22:19.450 09:50:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:19.450 09:50:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:22:19.450 09:50:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:19.450 09:50:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:22:19.450 09:50:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:22:19.450 09:50:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:19.450 09:50:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:22:19.450 09:50:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:22:19.450 09:50:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:19.450 09:50:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:22:19.450 09:50:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:22:19.450 09:50:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:19.450 09:50:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
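The xtrace above is hard to follow through the prefixes, so here is the same JSON-building pattern condensed into a short, hedged sketch: one bdev_nvme_attach_controller fragment is appended to a bash array per subsystem, then the fragments are joined and pretty-printed with jq. This is a paraphrase of what the traced gen_nvmf_target_json helper appears to do, not its verbatim source; in the real helper the joined fragments are embedded in the config envelope that is handed to the fio plugin, which is not shown in this excerpt.

config=()
for sub in 0; do
config+=("$(cat <<EOF
{
  "method": "bdev_nvme_attach_controller",
  "params": {
    "name": "Nvme$sub",
    "trtype": "tcp",
    "traddr": "10.0.0.3",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$sub",
    "hostnqn": "nqn.2016-06.io.spdk:host$sub",
    "hdgst": false,
    "ddgst": false
  }
}
EOF
)")
done
# Join the per-subsystem fragments and pretty-print them, as the trace does
# with IFS=, / printf / jq; the real helper wraps this output before passing
# it to fio via --spdk_json_conf.
(IFS=,; printf '%s\n' "${config[*]}") | jq .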
00:22:19.450 09:50:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:22:19.450 09:50:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:19.450 "params": { 00:22:19.450 "name": "Nvme0", 00:22:19.450 "trtype": "tcp", 00:22:19.450 "traddr": "10.0.0.3", 00:22:19.450 "adrfam": "ipv4", 00:22:19.450 "trsvcid": "4420", 00:22:19.450 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:19.450 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:19.450 "hdgst": false, 00:22:19.450 "ddgst": false 00:22:19.450 }, 00:22:19.450 "method": "bdev_nvme_attach_controller" 00:22:19.450 }' 00:22:19.450 09:50:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:22:19.450 09:50:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:22:19.450 09:50:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:19.450 09:50:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:19.450 09:50:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:22:19.450 09:50:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:19.450 09:50:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:22:19.450 09:50:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:22:19.450 09:50:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:22:19.450 09:50:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:19.450 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:22:19.450 ... 
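The tail of the trace above shows the sanitizer probe (ldd piped through grep libasan / libclang_rt.asan and awk) coming back empty, after which stock fio is launched with the SPDK bdev plugin in LD_PRELOAD and the generated JSON config on /dev/fd/62. A minimal bash sketch of that launch sequence follows, assuming the plugin path seen in the trace; variable names are illustrative and not the verbatim autotest_common.sh helper.

plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
preload=""
for sanitizer in libasan libclang_rt.asan; do
  # If the plugin links a sanitizer runtime, that runtime must be preloaded
  # ahead of the plugin when launching an unsanitized fio binary.
  lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
  [[ -n "$lib" ]] && preload+="$lib "
done
# /dev/fd/62 and /dev/fd/61 carry the generated JSON bdev config and the
# fio job file in the real run (process substitutions set up by dif.sh).
LD_PRELOAD="$preload$plugin" /usr/src/fio/fio \
  --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61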
00:22:19.450 fio-3.35 00:22:19.450 Starting 3 threads 00:22:26.014 00:22:26.014 filename0: (groupid=0, jobs=1): err= 0: pid=83487: Tue Nov 19 09:50:12 2024 00:22:26.014 read: IOPS=258, BW=32.3MiB/s (33.9MB/s)(162MiB/5006msec) 00:22:26.014 slat (nsec): min=6898, max=53220, avg=10619.72, stdev=4712.34 00:22:26.014 clat (usec): min=4586, max=13679, avg=11587.25, stdev=452.21 00:22:26.014 lat (usec): min=4595, max=13690, avg=11597.86, stdev=452.18 00:22:26.014 clat percentiles (usec): 00:22:26.014 | 1.00th=[10945], 5.00th=[11207], 10.00th=[11338], 20.00th=[11338], 00:22:26.014 | 30.00th=[11469], 40.00th=[11469], 50.00th=[11600], 60.00th=[11600], 00:22:26.014 | 70.00th=[11731], 80.00th=[11731], 90.00th=[11863], 95.00th=[11994], 00:22:26.014 | 99.00th=[12780], 99.50th=[13435], 99.90th=[13698], 99.95th=[13698], 00:22:26.014 | 99.99th=[13698] 00:22:26.014 bw ( KiB/s): min=32256, max=33792, per=33.33%, avg=33009.33, stdev=386.51, samples=9 00:22:26.014 iops : min= 252, max= 264, avg=257.78, stdev= 3.07, samples=9 00:22:26.014 lat (msec) : 10=0.23%, 20=99.77% 00:22:26.014 cpu : usr=90.31%, sys=8.89%, ctx=81, majf=0, minf=0 00:22:26.014 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:26.014 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:26.014 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:26.014 issued rwts: total=1293,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:26.014 latency : target=0, window=0, percentile=100.00%, depth=3 00:22:26.014 filename0: (groupid=0, jobs=1): err= 0: pid=83488: Tue Nov 19 09:50:12 2024 00:22:26.014 read: IOPS=257, BW=32.2MiB/s (33.8MB/s)(161MiB/5001msec) 00:22:26.014 slat (nsec): min=7366, max=47584, avg=14798.57, stdev=3850.53 00:22:26.014 clat (usec): min=9352, max=13806, avg=11596.92, stdev=329.42 00:22:26.014 lat (usec): min=9359, max=13821, avg=11611.72, stdev=329.67 00:22:26.014 clat percentiles (usec): 00:22:26.014 | 1.00th=[10945], 5.00th=[11207], 10.00th=[11338], 20.00th=[11338], 00:22:26.014 | 30.00th=[11469], 40.00th=[11469], 50.00th=[11600], 60.00th=[11600], 00:22:26.014 | 70.00th=[11731], 80.00th=[11731], 90.00th=[11863], 95.00th=[11994], 00:22:26.014 | 99.00th=[13042], 99.50th=[13173], 99.90th=[13829], 99.95th=[13829], 00:22:26.014 | 99.99th=[13829] 00:22:26.014 bw ( KiB/s): min=32256, max=33792, per=33.26%, avg=32938.67, stdev=461.51, samples=9 00:22:26.014 iops : min= 252, max= 264, avg=257.33, stdev= 3.61, samples=9 00:22:26.014 lat (msec) : 10=0.23%, 20=99.77% 00:22:26.014 cpu : usr=90.06%, sys=9.36%, ctx=58, majf=0, minf=0 00:22:26.014 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:26.014 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:26.014 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:26.014 issued rwts: total=1290,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:26.014 latency : target=0, window=0, percentile=100.00%, depth=3 00:22:26.014 filename0: (groupid=0, jobs=1): err= 0: pid=83489: Tue Nov 19 09:50:12 2024 00:22:26.014 read: IOPS=257, BW=32.2MiB/s (33.8MB/s)(161MiB/5001msec) 00:22:26.014 slat (nsec): min=7670, max=93803, avg=14384.16, stdev=4253.56 00:22:26.014 clat (usec): min=9363, max=13802, avg=11598.39, stdev=331.36 00:22:26.014 lat (usec): min=9376, max=13814, avg=11612.77, stdev=331.56 00:22:26.014 clat percentiles (usec): 00:22:26.014 | 1.00th=[10945], 5.00th=[11207], 10.00th=[11338], 20.00th=[11338], 00:22:26.014 | 30.00th=[11469], 40.00th=[11600], 
50.00th=[11600], 60.00th=[11600], 00:22:26.014 | 70.00th=[11731], 80.00th=[11731], 90.00th=[11863], 95.00th=[11994], 00:22:26.014 | 99.00th=[13042], 99.50th=[13173], 99.90th=[13829], 99.95th=[13829], 00:22:26.014 | 99.99th=[13829] 00:22:26.014 bw ( KiB/s): min=32256, max=33792, per=33.26%, avg=32938.67, stdev=461.51, samples=9 00:22:26.014 iops : min= 252, max= 264, avg=257.33, stdev= 3.61, samples=9 00:22:26.014 lat (msec) : 10=0.23%, 20=99.77% 00:22:26.014 cpu : usr=91.32%, sys=8.08%, ctx=53, majf=0, minf=0 00:22:26.014 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:26.014 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:26.014 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:26.014 issued rwts: total=1290,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:26.014 latency : target=0, window=0, percentile=100.00%, depth=3 00:22:26.014 00:22:26.014 Run status group 0 (all jobs): 00:22:26.014 READ: bw=96.7MiB/s (101MB/s), 32.2MiB/s-32.3MiB/s (33.8MB/s-33.9MB/s), io=484MiB (508MB), run=5001-5006msec 00:22:26.014 09:50:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:22:26.014 09:50:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:22:26.014 09:50:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:22:26.014 09:50:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:22:26.014 09:50:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:22:26.014 09:50:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:22:26.014 09:50:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.014 09:50:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:26.014 09:50:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.014 09:50:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:22:26.014 09:50:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.014 09:50:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:26.014 09:50:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.014 09:50:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:22:26.014 09:50:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:22:26.014 09:50:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:22:26.014 09:50:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:22:26.014 09:50:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:22:26.014 09:50:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:22:26.014 09:50:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:22:26.014 09:50:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:22:26.014 09:50:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:22:26.014 09:50:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:22:26.014 09:50:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:22:26.014 09:50:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:22:26.014 09:50:12 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.014 09:50:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:26.014 bdev_null0 00:22:26.014 09:50:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.014 09:50:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:22:26.014 09:50:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.014 09:50:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:26.014 09:50:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.015 09:50:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:22:26.015 09:50:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.015 09:50:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:26.015 09:50:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.015 09:50:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:22:26.015 09:50:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.015 09:50:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:26.015 [2024-11-19 09:50:12.832146] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:22:26.015 09:50:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.015 09:50:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:22:26.015 09:50:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:22:26.015 09:50:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:22:26.015 09:50:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:22:26.015 09:50:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.015 09:50:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:26.015 bdev_null1 00:22:26.015 09:50:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.015 09:50:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:22:26.015 09:50:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.015 09:50:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:26.015 09:50:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.015 09:50:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:22:26.015 09:50:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.015 09:50:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:26.015 09:50:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.015 09:50:12 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:22:26.015 09:50:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.015 09:50:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:26.015 09:50:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.015 09:50:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:22:26.015 09:50:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:22:26.015 09:50:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:22:26.015 09:50:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:22:26.015 09:50:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.015 09:50:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:26.015 bdev_null2 00:22:26.015 09:50:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.015 09:50:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:22:26.015 09:50:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.015 09:50:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:26.015 09:50:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.015 09:50:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:22:26.015 09:50:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.015 09:50:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:26.015 09:50:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.015 09:50:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:22:26.015 09:50:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.015 09:50:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:26.015 09:50:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.015 09:50:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:22:26.015 09:50:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:22:26.015 09:50:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:22:26.015 09:50:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:22:26.015 09:50:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:22:26.015 09:50:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:26.015 09:50:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:26.015 09:50:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:26.015 { 00:22:26.015 "params": { 00:22:26.015 "name": "Nvme$subsystem", 00:22:26.015 "trtype": "$TEST_TRANSPORT", 00:22:26.015 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:22:26.015 "adrfam": "ipv4", 00:22:26.015 "trsvcid": "$NVMF_PORT", 00:22:26.015 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:26.015 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:26.015 "hdgst": ${hdgst:-false}, 00:22:26.015 "ddgst": ${ddgst:-false} 00:22:26.015 }, 00:22:26.015 "method": "bdev_nvme_attach_controller" 00:22:26.015 } 00:22:26.015 EOF 00:22:26.015 )") 00:22:26.015 09:50:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:26.015 09:50:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:22:26.015 09:50:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:22:26.015 09:50:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:26.015 09:50:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:22:26.015 09:50:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:26.015 09:50:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:22:26.015 09:50:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:22:26.015 09:50:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:22:26.015 09:50:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:22:26.015 09:50:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:22:26.015 09:50:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:26.015 09:50:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:26.015 09:50:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:22:26.015 09:50:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:26.015 09:50:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:22:26.015 09:50:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:22:26.015 09:50:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:22:26.015 09:50:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:26.015 09:50:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:26.015 { 00:22:26.015 "params": { 00:22:26.015 "name": "Nvme$subsystem", 00:22:26.015 "trtype": "$TEST_TRANSPORT", 00:22:26.015 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:26.015 "adrfam": "ipv4", 00:22:26.015 "trsvcid": "$NVMF_PORT", 00:22:26.015 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:26.015 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:26.015 "hdgst": ${hdgst:-false}, 00:22:26.015 "ddgst": ${ddgst:-false} 00:22:26.015 }, 00:22:26.015 "method": "bdev_nvme_attach_controller" 00:22:26.015 } 00:22:26.015 EOF 00:22:26.015 )") 00:22:26.015 09:50:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:22:26.015 09:50:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:22:26.015 09:50:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:22:26.015 09:50:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:22:26.015 09:50:12 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:26.015 09:50:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:26.015 { 00:22:26.015 "params": { 00:22:26.015 "name": "Nvme$subsystem", 00:22:26.015 "trtype": "$TEST_TRANSPORT", 00:22:26.015 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:26.015 "adrfam": "ipv4", 00:22:26.015 "trsvcid": "$NVMF_PORT", 00:22:26.015 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:26.015 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:26.015 "hdgst": ${hdgst:-false}, 00:22:26.015 "ddgst": ${ddgst:-false} 00:22:26.015 }, 00:22:26.015 "method": "bdev_nvme_attach_controller" 00:22:26.015 } 00:22:26.015 EOF 00:22:26.015 )") 00:22:26.015 09:50:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:22:26.015 09:50:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:22:26.015 09:50:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:22:26.015 09:50:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:22:26.015 09:50:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:22:26.015 09:50:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:26.015 "params": { 00:22:26.015 "name": "Nvme0", 00:22:26.015 "trtype": "tcp", 00:22:26.015 "traddr": "10.0.0.3", 00:22:26.015 "adrfam": "ipv4", 00:22:26.015 "trsvcid": "4420", 00:22:26.015 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:26.015 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:26.015 "hdgst": false, 00:22:26.015 "ddgst": false 00:22:26.015 }, 00:22:26.015 "method": "bdev_nvme_attach_controller" 00:22:26.015 },{ 00:22:26.015 "params": { 00:22:26.016 "name": "Nvme1", 00:22:26.016 "trtype": "tcp", 00:22:26.016 "traddr": "10.0.0.3", 00:22:26.016 "adrfam": "ipv4", 00:22:26.016 "trsvcid": "4420", 00:22:26.016 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:26.016 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:26.016 "hdgst": false, 00:22:26.016 "ddgst": false 00:22:26.016 }, 00:22:26.016 "method": "bdev_nvme_attach_controller" 00:22:26.016 },{ 00:22:26.016 "params": { 00:22:26.016 "name": "Nvme2", 00:22:26.016 "trtype": "tcp", 00:22:26.016 "traddr": "10.0.0.3", 00:22:26.016 "adrfam": "ipv4", 00:22:26.016 "trsvcid": "4420", 00:22:26.016 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:26.016 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:26.016 "hdgst": false, 00:22:26.016 "ddgst": false 00:22:26.016 }, 00:22:26.016 "method": "bdev_nvme_attach_controller" 00:22:26.016 }' 00:22:26.016 09:50:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:22:26.016 09:50:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:22:26.016 09:50:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:26.016 09:50:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:26.016 09:50:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:22:26.016 09:50:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:26.016 09:50:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:22:26.016 09:50:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:22:26.016 09:50:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # 
LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:22:26.016 09:50:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:26.016 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:22:26.016 ... 00:22:26.016 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:22:26.016 ... 00:22:26.016 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:22:26.016 ... 00:22:26.016 fio-3.35 00:22:26.016 Starting 24 threads 00:22:38.247 00:22:38.247 filename0: (groupid=0, jobs=1): err= 0: pid=83588: Tue Nov 19 09:50:23 2024 00:22:38.247 read: IOPS=223, BW=895KiB/s (916kB/s)(8988KiB/10045msec) 00:22:38.247 slat (usec): min=5, max=4045, avg=23.86, stdev=181.33 00:22:38.247 clat (msec): min=20, max=120, avg=71.31, stdev=18.49 00:22:38.247 lat (msec): min=20, max=120, avg=71.33, stdev=18.49 00:22:38.247 clat percentiles (msec): 00:22:38.247 | 1.00th=[ 32], 5.00th=[ 44], 10.00th=[ 48], 20.00th=[ 54], 00:22:38.247 | 30.00th=[ 65], 40.00th=[ 70], 50.00th=[ 73], 60.00th=[ 75], 00:22:38.247 | 70.00th=[ 79], 80.00th=[ 85], 90.00th=[ 94], 95.00th=[ 107], 00:22:38.247 | 99.00th=[ 117], 99.50th=[ 120], 99.90th=[ 122], 99.95th=[ 122], 00:22:38.247 | 99.99th=[ 122] 00:22:38.247 bw ( KiB/s): min= 694, max= 1296, per=3.97%, avg=892.30, stdev=127.54, samples=20 00:22:38.247 iops : min= 173, max= 324, avg=223.05, stdev=31.93, samples=20 00:22:38.247 lat (msec) : 50=15.26%, 100=77.79%, 250=6.94% 00:22:38.247 cpu : usr=41.26%, sys=1.94%, ctx=1510, majf=0, minf=9 00:22:38.247 IO depths : 1=0.1%, 2=2.1%, 4=8.4%, 8=74.5%, 16=15.0%, 32=0.0%, >=64=0.0% 00:22:38.247 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:38.247 complete : 0=0.0%, 4=89.5%, 8=8.6%, 16=1.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:38.247 issued rwts: total=2247,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:38.247 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:38.247 filename0: (groupid=0, jobs=1): err= 0: pid=83589: Tue Nov 19 09:50:23 2024 00:22:38.247 read: IOPS=243, BW=973KiB/s (996kB/s)(9756KiB/10027msec) 00:22:38.247 slat (usec): min=4, max=11028, avg=22.98, stdev=250.78 00:22:38.247 clat (msec): min=14, max=126, avg=65.65, stdev=18.68 00:22:38.247 lat (msec): min=14, max=126, avg=65.67, stdev=18.68 00:22:38.247 clat percentiles (msec): 00:22:38.247 | 1.00th=[ 25], 5.00th=[ 41], 10.00th=[ 45], 20.00th=[ 48], 00:22:38.247 | 30.00th=[ 53], 40.00th=[ 60], 50.00th=[ 68], 60.00th=[ 72], 00:22:38.247 | 70.00th=[ 74], 80.00th=[ 80], 90.00th=[ 87], 95.00th=[ 102], 00:22:38.247 | 99.00th=[ 116], 99.50th=[ 118], 99.90th=[ 127], 99.95th=[ 127], 00:22:38.247 | 99.99th=[ 127] 00:22:38.247 bw ( KiB/s): min= 712, max= 1472, per=4.33%, avg=974.32, stdev=147.24, samples=19 00:22:38.247 iops : min= 178, max= 368, avg=243.58, stdev=36.81, samples=19 00:22:38.247 lat (msec) : 20=0.78%, 50=25.26%, 100=68.63%, 250=5.33% 00:22:38.247 cpu : usr=41.80%, sys=2.11%, ctx=1359, majf=0, minf=9 00:22:38.247 IO depths : 1=0.1%, 2=0.1%, 4=0.5%, 8=83.6%, 16=15.8%, 32=0.0%, >=64=0.0% 00:22:38.247 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:38.247 complete : 0=0.0%, 4=86.9%, 8=13.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:38.247 issued rwts: total=2439,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:38.247 latency : 
target=0, window=0, percentile=100.00%, depth=16 00:22:38.247 filename0: (groupid=0, jobs=1): err= 0: pid=83590: Tue Nov 19 09:50:23 2024 00:22:38.247 read: IOPS=242, BW=971KiB/s (994kB/s)(9720KiB/10012msec) 00:22:38.247 slat (usec): min=4, max=4032, avg=20.79, stdev=141.08 00:22:38.247 clat (msec): min=23, max=129, avg=65.81, stdev=18.14 00:22:38.247 lat (msec): min=23, max=129, avg=65.83, stdev=18.14 00:22:38.247 clat percentiles (msec): 00:22:38.247 | 1.00th=[ 28], 5.00th=[ 41], 10.00th=[ 48], 20.00th=[ 48], 00:22:38.247 | 30.00th=[ 52], 40.00th=[ 61], 50.00th=[ 68], 60.00th=[ 72], 00:22:38.247 | 70.00th=[ 73], 80.00th=[ 81], 90.00th=[ 87], 95.00th=[ 100], 00:22:38.247 | 99.00th=[ 117], 99.50th=[ 121], 99.90th=[ 130], 99.95th=[ 130], 00:22:38.247 | 99.99th=[ 130] 00:22:38.247 bw ( KiB/s): min= 712, max= 1216, per=4.29%, avg=964.95, stdev=105.71, samples=19 00:22:38.247 iops : min= 178, max= 304, avg=241.21, stdev=26.48, samples=19 00:22:38.247 lat (msec) : 50=27.24%, 100=67.82%, 250=4.94% 00:22:38.247 cpu : usr=36.41%, sys=2.04%, ctx=1071, majf=0, minf=9 00:22:38.247 IO depths : 1=0.1%, 2=0.2%, 4=0.6%, 8=83.5%, 16=15.7%, 32=0.0%, >=64=0.0% 00:22:38.247 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:38.247 complete : 0=0.0%, 4=86.8%, 8=13.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:38.247 issued rwts: total=2430,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:38.247 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:38.247 filename0: (groupid=0, jobs=1): err= 0: pid=83591: Tue Nov 19 09:50:23 2024 00:22:38.247 read: IOPS=227, BW=910KiB/s (932kB/s)(9128KiB/10026msec) 00:22:38.247 slat (usec): min=4, max=8027, avg=17.94, stdev=167.82 00:22:38.247 clat (msec): min=26, max=127, avg=70.17, stdev=18.38 00:22:38.247 lat (msec): min=26, max=127, avg=70.19, stdev=18.38 00:22:38.247 clat percentiles (msec): 00:22:38.247 | 1.00th=[ 29], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 51], 00:22:38.247 | 30.00th=[ 61], 40.00th=[ 70], 50.00th=[ 72], 60.00th=[ 73], 00:22:38.247 | 70.00th=[ 79], 80.00th=[ 84], 90.00th=[ 95], 95.00th=[ 106], 00:22:38.247 | 99.00th=[ 118], 99.50th=[ 121], 99.90th=[ 122], 99.95th=[ 122], 00:22:38.247 | 99.99th=[ 128] 00:22:38.248 bw ( KiB/s): min= 688, max= 1264, per=4.04%, avg=907.37, stdev=117.89, samples=19 00:22:38.248 iops : min= 172, max= 316, avg=226.84, stdev=29.47, samples=19 00:22:38.248 lat (msec) : 50=18.97%, 100=74.85%, 250=6.18% 00:22:38.248 cpu : usr=36.93%, sys=1.98%, ctx=1007, majf=0, minf=9 00:22:38.248 IO depths : 1=0.1%, 2=0.9%, 4=3.5%, 8=79.6%, 16=16.0%, 32=0.0%, >=64=0.0% 00:22:38.248 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:38.248 complete : 0=0.0%, 4=88.3%, 8=10.9%, 16=0.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:38.248 issued rwts: total=2282,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:38.248 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:38.248 filename0: (groupid=0, jobs=1): err= 0: pid=83592: Tue Nov 19 09:50:23 2024 00:22:38.248 read: IOPS=238, BW=952KiB/s (975kB/s)(9544KiB/10023msec) 00:22:38.248 slat (usec): min=4, max=8033, avg=41.84, stdev=463.31 00:22:38.248 clat (msec): min=30, max=120, avg=66.99, stdev=18.63 00:22:38.248 lat (msec): min=30, max=120, avg=67.03, stdev=18.64 00:22:38.248 clat percentiles (msec): 00:22:38.248 | 1.00th=[ 32], 5.00th=[ 40], 10.00th=[ 48], 20.00th=[ 48], 00:22:38.248 | 30.00th=[ 52], 40.00th=[ 61], 50.00th=[ 72], 60.00th=[ 72], 00:22:38.248 | 70.00th=[ 73], 80.00th=[ 82], 90.00th=[ 87], 95.00th=[ 107], 00:22:38.248 | 99.00th=[ 
118], 99.50th=[ 120], 99.90th=[ 121], 99.95th=[ 121], 00:22:38.248 | 99.99th=[ 121] 00:22:38.248 bw ( KiB/s): min= 712, max= 1154, per=4.21%, avg=947.05, stdev=95.91, samples=19 00:22:38.248 iops : min= 178, max= 288, avg=236.74, stdev=23.92, samples=19 00:22:38.248 lat (msec) : 50=27.49%, 100=66.43%, 250=6.08% 00:22:38.248 cpu : usr=31.57%, sys=1.45%, ctx=859, majf=0, minf=9 00:22:38.248 IO depths : 1=0.1%, 2=0.4%, 4=1.6%, 8=82.2%, 16=15.8%, 32=0.0%, >=64=0.0% 00:22:38.248 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:38.248 complete : 0=0.0%, 4=87.3%, 8=12.3%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:38.248 issued rwts: total=2386,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:38.248 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:38.248 filename0: (groupid=0, jobs=1): err= 0: pid=83593: Tue Nov 19 09:50:23 2024 00:22:38.248 read: IOPS=227, BW=910KiB/s (932kB/s)(9136KiB/10037msec) 00:22:38.248 slat (usec): min=5, max=8024, avg=19.87, stdev=237.05 00:22:38.248 clat (msec): min=17, max=143, avg=70.15, stdev=19.56 00:22:38.248 lat (msec): min=17, max=143, avg=70.17, stdev=19.56 00:22:38.248 clat percentiles (msec): 00:22:38.248 | 1.00th=[ 24], 5.00th=[ 37], 10.00th=[ 48], 20.00th=[ 50], 00:22:38.248 | 30.00th=[ 61], 40.00th=[ 72], 50.00th=[ 72], 60.00th=[ 72], 00:22:38.248 | 70.00th=[ 75], 80.00th=[ 84], 90.00th=[ 96], 95.00th=[ 108], 00:22:38.248 | 99.00th=[ 121], 99.50th=[ 121], 99.90th=[ 133], 99.95th=[ 144], 00:22:38.248 | 99.99th=[ 144] 00:22:38.248 bw ( KiB/s): min= 632, max= 1460, per=4.04%, avg=908.70, stdev=156.69, samples=20 00:22:38.248 iops : min= 158, max= 365, avg=227.15, stdev=39.17, samples=20 00:22:38.248 lat (msec) : 20=0.61%, 50=20.58%, 100=71.76%, 250=7.05% 00:22:38.248 cpu : usr=31.70%, sys=1.53%, ctx=857, majf=0, minf=9 00:22:38.248 IO depths : 1=0.1%, 2=0.3%, 4=1.2%, 8=81.7%, 16=16.8%, 32=0.0%, >=64=0.0% 00:22:38.248 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:38.248 complete : 0=0.0%, 4=88.0%, 8=11.7%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:38.248 issued rwts: total=2284,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:38.248 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:38.248 filename0: (groupid=0, jobs=1): err= 0: pid=83594: Tue Nov 19 09:50:23 2024 00:22:38.248 read: IOPS=227, BW=911KiB/s (932kB/s)(9160KiB/10060msec) 00:22:38.248 slat (usec): min=4, max=4022, avg=21.43, stdev=175.97 00:22:38.248 clat (msec): min=2, max=150, avg=70.10, stdev=23.14 00:22:38.248 lat (msec): min=2, max=150, avg=70.12, stdev=23.14 00:22:38.248 clat percentiles (msec): 00:22:38.248 | 1.00th=[ 5], 5.00th=[ 26], 10.00th=[ 45], 20.00th=[ 53], 00:22:38.248 | 30.00th=[ 64], 40.00th=[ 70], 50.00th=[ 73], 60.00th=[ 77], 00:22:38.248 | 70.00th=[ 80], 80.00th=[ 87], 90.00th=[ 97], 95.00th=[ 107], 00:22:38.248 | 99.00th=[ 120], 99.50th=[ 125], 99.90th=[ 140], 99.95th=[ 144], 00:22:38.248 | 99.99th=[ 150] 00:22:38.248 bw ( KiB/s): min= 664, max= 2032, per=4.04%, avg=909.60, stdev=279.53, samples=20 00:22:38.248 iops : min= 166, max= 508, avg=227.40, stdev=69.88, samples=20 00:22:38.248 lat (msec) : 4=0.96%, 10=1.83%, 20=1.40%, 50=13.14%, 100=75.33% 00:22:38.248 lat (msec) : 250=7.34% 00:22:38.248 cpu : usr=40.92%, sys=2.60%, ctx=1480, majf=0, minf=0 00:22:38.248 IO depths : 1=0.1%, 2=2.1%, 4=8.3%, 8=74.2%, 16=15.3%, 32=0.0%, >=64=0.0% 00:22:38.248 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:38.248 complete : 0=0.0%, 4=89.7%, 8=8.4%, 16=1.8%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:22:38.248 issued rwts: total=2290,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:38.248 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:38.248 filename0: (groupid=0, jobs=1): err= 0: pid=83595: Tue Nov 19 09:50:23 2024 00:22:38.248 read: IOPS=236, BW=948KiB/s (970kB/s)(9500KiB/10024msec) 00:22:38.248 slat (usec): min=4, max=8033, avg=21.76, stdev=232.64 00:22:38.248 clat (msec): min=17, max=144, avg=67.41, stdev=18.40 00:22:38.248 lat (msec): min=17, max=144, avg=67.43, stdev=18.41 00:22:38.248 clat percentiles (msec): 00:22:38.248 | 1.00th=[ 32], 5.00th=[ 40], 10.00th=[ 47], 20.00th=[ 49], 00:22:38.248 | 30.00th=[ 57], 40.00th=[ 63], 50.00th=[ 71], 60.00th=[ 72], 00:22:38.248 | 70.00th=[ 74], 80.00th=[ 82], 90.00th=[ 90], 95.00th=[ 99], 00:22:38.248 | 99.00th=[ 121], 99.50th=[ 121], 99.90th=[ 121], 99.95th=[ 121], 00:22:38.248 | 99.99th=[ 144] 00:22:38.248 bw ( KiB/s): min= 688, max= 1272, per=4.21%, avg=946.11, stdev=116.93, samples=19 00:22:38.248 iops : min= 172, max= 318, avg=236.53, stdev=29.23, samples=19 00:22:38.248 lat (msec) : 20=0.08%, 50=22.82%, 100=72.42%, 250=4.67% 00:22:38.248 cpu : usr=36.07%, sys=2.10%, ctx=1127, majf=0, minf=9 00:22:38.248 IO depths : 1=0.1%, 2=0.4%, 4=1.6%, 8=81.9%, 16=16.0%, 32=0.0%, >=64=0.0% 00:22:38.248 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:38.248 complete : 0=0.0%, 4=87.5%, 8=12.2%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:38.248 issued rwts: total=2375,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:38.248 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:38.248 filename1: (groupid=0, jobs=1): err= 0: pid=83596: Tue Nov 19 09:50:23 2024 00:22:38.248 read: IOPS=244, BW=978KiB/s (1001kB/s)(9784KiB/10004msec) 00:22:38.248 slat (usec): min=4, max=8028, avg=22.46, stdev=207.65 00:22:38.248 clat (msec): min=10, max=125, avg=65.36, stdev=18.32 00:22:38.248 lat (msec): min=10, max=125, avg=65.38, stdev=18.33 00:22:38.248 clat percentiles (msec): 00:22:38.248 | 1.00th=[ 32], 5.00th=[ 41], 10.00th=[ 45], 20.00th=[ 48], 00:22:38.248 | 30.00th=[ 53], 40.00th=[ 58], 50.00th=[ 68], 60.00th=[ 72], 00:22:38.248 | 70.00th=[ 74], 80.00th=[ 79], 90.00th=[ 87], 95.00th=[ 100], 00:22:38.248 | 99.00th=[ 114], 99.50th=[ 120], 99.90th=[ 127], 99.95th=[ 127], 00:22:38.248 | 99.99th=[ 127] 00:22:38.248 bw ( KiB/s): min= 768, max= 1168, per=4.32%, avg=970.53, stdev=96.22, samples=19 00:22:38.248 iops : min= 192, max= 292, avg=242.63, stdev=24.06, samples=19 00:22:38.248 lat (msec) : 20=0.25%, 50=25.55%, 100=69.46%, 250=4.74% 00:22:38.248 cpu : usr=41.17%, sys=2.13%, ctx=1273, majf=0, minf=0 00:22:38.248 IO depths : 1=0.1%, 2=0.3%, 4=1.3%, 8=82.9%, 16=15.5%, 32=0.0%, >=64=0.0% 00:22:38.248 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:38.248 complete : 0=0.0%, 4=86.9%, 8=12.8%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:38.248 issued rwts: total=2446,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:38.248 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:38.248 filename1: (groupid=0, jobs=1): err= 0: pid=83597: Tue Nov 19 09:50:23 2024 00:22:38.248 read: IOPS=238, BW=956KiB/s (979kB/s)(9612KiB/10057msec) 00:22:38.248 slat (usec): min=4, max=8026, avg=16.42, stdev=163.54 00:22:38.248 clat (msec): min=4, max=131, avg=66.83, stdev=22.44 00:22:38.248 lat (msec): min=4, max=131, avg=66.85, stdev=22.44 00:22:38.248 clat percentiles (msec): 00:22:38.248 | 1.00th=[ 8], 5.00th=[ 21], 10.00th=[ 43], 20.00th=[ 49], 00:22:38.248 | 30.00th=[ 57], 40.00th=[ 66], 50.00th=[ 72], 
60.00th=[ 73], 00:22:38.248 | 70.00th=[ 77], 80.00th=[ 83], 90.00th=[ 94], 95.00th=[ 105], 00:22:38.248 | 99.00th=[ 120], 99.50th=[ 121], 99.90th=[ 127], 99.95th=[ 131], 00:22:38.248 | 99.99th=[ 132] 00:22:38.248 bw ( KiB/s): min= 664, max= 2155, per=4.25%, avg=955.35, stdev=293.83, samples=20 00:22:38.248 iops : min= 166, max= 538, avg=238.80, stdev=73.30, samples=20 00:22:38.248 lat (msec) : 10=2.25%, 20=2.66%, 50=17.69%, 100=71.20%, 250=6.20% 00:22:38.248 cpu : usr=36.83%, sys=2.33%, ctx=1187, majf=0, minf=9 00:22:38.248 IO depths : 1=0.1%, 2=0.1%, 4=0.5%, 8=82.7%, 16=16.6%, 32=0.0%, >=64=0.0% 00:22:38.248 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:38.248 complete : 0=0.0%, 4=87.6%, 8=12.3%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:38.248 issued rwts: total=2403,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:38.248 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:38.248 filename1: (groupid=0, jobs=1): err= 0: pid=83598: Tue Nov 19 09:50:23 2024 00:22:38.248 read: IOPS=235, BW=941KiB/s (964kB/s)(9444KiB/10031msec) 00:22:38.248 slat (usec): min=4, max=8024, avg=23.44, stdev=242.72 00:22:38.248 clat (msec): min=19, max=119, avg=67.83, stdev=18.67 00:22:38.248 lat (msec): min=19, max=119, avg=67.86, stdev=18.67 00:22:38.248 clat percentiles (msec): 00:22:38.248 | 1.00th=[ 28], 5.00th=[ 38], 10.00th=[ 47], 20.00th=[ 50], 00:22:38.248 | 30.00th=[ 58], 40.00th=[ 65], 50.00th=[ 71], 60.00th=[ 72], 00:22:38.248 | 70.00th=[ 75], 80.00th=[ 82], 90.00th=[ 91], 95.00th=[ 104], 00:22:38.248 | 99.00th=[ 118], 99.50th=[ 120], 99.90th=[ 121], 99.95th=[ 121], 00:22:38.248 | 99.99th=[ 121] 00:22:38.248 bw ( KiB/s): min= 720, max= 1456, per=4.18%, avg=939.90, stdev=140.34, samples=20 00:22:38.248 iops : min= 180, max= 364, avg=234.95, stdev=35.08, samples=20 00:22:38.248 lat (msec) : 20=0.59%, 50=20.20%, 100=73.27%, 250=5.93% 00:22:38.248 cpu : usr=37.99%, sys=1.97%, ctx=1125, majf=0, minf=9 00:22:38.248 IO depths : 1=0.1%, 2=0.3%, 4=1.3%, 8=82.1%, 16=16.2%, 32=0.0%, >=64=0.0% 00:22:38.248 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:38.248 complete : 0=0.0%, 4=87.6%, 8=12.1%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:38.248 issued rwts: total=2361,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:38.248 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:38.248 filename1: (groupid=0, jobs=1): err= 0: pid=83599: Tue Nov 19 09:50:23 2024 00:22:38.249 read: IOPS=227, BW=910KiB/s (932kB/s)(9128KiB/10032msec) 00:22:38.249 slat (usec): min=4, max=8037, avg=24.99, stdev=290.44 00:22:38.249 clat (msec): min=28, max=143, avg=70.19, stdev=18.35 00:22:38.249 lat (msec): min=28, max=143, avg=70.22, stdev=18.35 00:22:38.249 clat percentiles (msec): 00:22:38.249 | 1.00th=[ 32], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 51], 00:22:38.249 | 30.00th=[ 61], 40.00th=[ 70], 50.00th=[ 72], 60.00th=[ 72], 00:22:38.249 | 70.00th=[ 75], 80.00th=[ 84], 90.00th=[ 96], 95.00th=[ 107], 00:22:38.249 | 99.00th=[ 121], 99.50th=[ 121], 99.90th=[ 132], 99.95th=[ 144], 00:22:38.249 | 99.99th=[ 144] 00:22:38.249 bw ( KiB/s): min= 608, max= 1208, per=4.03%, avg=906.40, stdev=120.63, samples=20 00:22:38.249 iops : min= 152, max= 302, avg=226.60, stdev=30.16, samples=20 00:22:38.249 lat (msec) : 50=19.59%, 100=74.80%, 250=5.61% 00:22:38.249 cpu : usr=31.63%, sys=1.41%, ctx=861, majf=0, minf=9 00:22:38.249 IO depths : 1=0.1%, 2=0.4%, 4=1.5%, 8=81.4%, 16=16.6%, 32=0.0%, >=64=0.0% 00:22:38.249 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:22:38.249 complete : 0=0.0%, 4=87.9%, 8=11.7%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:38.249 issued rwts: total=2282,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:38.249 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:38.249 filename1: (groupid=0, jobs=1): err= 0: pid=83600: Tue Nov 19 09:50:23 2024 00:22:38.249 read: IOPS=240, BW=961KiB/s (984kB/s)(9624KiB/10016msec) 00:22:38.249 slat (usec): min=3, max=8038, avg=34.25, stdev=399.77 00:22:38.249 clat (msec): min=21, max=120, avg=66.45, stdev=18.34 00:22:38.249 lat (msec): min=21, max=120, avg=66.49, stdev=18.35 00:22:38.249 clat percentiles (msec): 00:22:38.249 | 1.00th=[ 36], 5.00th=[ 40], 10.00th=[ 48], 20.00th=[ 48], 00:22:38.249 | 30.00th=[ 52], 40.00th=[ 61], 50.00th=[ 71], 60.00th=[ 72], 00:22:38.249 | 70.00th=[ 72], 80.00th=[ 83], 90.00th=[ 87], 95.00th=[ 100], 00:22:38.249 | 99.00th=[ 121], 99.50th=[ 121], 99.90th=[ 121], 99.95th=[ 121], 00:22:38.249 | 99.99th=[ 121] 00:22:38.249 bw ( KiB/s): min= 712, max= 1216, per=4.24%, avg=954.26, stdev=108.07, samples=19 00:22:38.249 iops : min= 178, max= 304, avg=238.53, stdev=27.04, samples=19 00:22:38.249 lat (msec) : 50=28.22%, 100=67.17%, 250=4.61% 00:22:38.249 cpu : usr=31.72%, sys=1.61%, ctx=856, majf=0, minf=9 00:22:38.249 IO depths : 1=0.1%, 2=0.2%, 4=0.6%, 8=83.3%, 16=15.8%, 32=0.0%, >=64=0.0% 00:22:38.249 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:38.249 complete : 0=0.0%, 4=86.9%, 8=12.9%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:38.249 issued rwts: total=2406,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:38.249 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:38.249 filename1: (groupid=0, jobs=1): err= 0: pid=83601: Tue Nov 19 09:50:23 2024 00:22:38.249 read: IOPS=242, BW=971KiB/s (994kB/s)(9732KiB/10025msec) 00:22:38.249 slat (usec): min=3, max=4043, avg=25.14, stdev=180.45 00:22:38.249 clat (msec): min=29, max=128, avg=65.76, stdev=18.11 00:22:38.249 lat (msec): min=29, max=128, avg=65.79, stdev=18.12 00:22:38.249 clat percentiles (msec): 00:22:38.249 | 1.00th=[ 31], 5.00th=[ 42], 10.00th=[ 46], 20.00th=[ 49], 00:22:38.249 | 30.00th=[ 53], 40.00th=[ 59], 50.00th=[ 68], 60.00th=[ 71], 00:22:38.249 | 70.00th=[ 75], 80.00th=[ 79], 90.00th=[ 88], 95.00th=[ 102], 00:22:38.249 | 99.00th=[ 117], 99.50th=[ 118], 99.90th=[ 123], 99.95th=[ 123], 00:22:38.249 | 99.99th=[ 129] 00:22:38.249 bw ( KiB/s): min= 720, max= 1298, per=4.31%, avg=968.95, stdev=112.67, samples=19 00:22:38.249 iops : min= 180, max= 324, avg=242.21, stdev=28.09, samples=19 00:22:38.249 lat (msec) : 50=23.10%, 100=71.80%, 250=5.10% 00:22:38.249 cpu : usr=48.91%, sys=2.90%, ctx=1372, majf=0, minf=9 00:22:38.249 IO depths : 1=0.1%, 2=0.2%, 4=0.9%, 8=83.1%, 16=15.7%, 32=0.0%, >=64=0.0% 00:22:38.249 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:38.249 complete : 0=0.0%, 4=86.9%, 8=12.9%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:38.249 issued rwts: total=2433,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:38.249 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:38.249 filename1: (groupid=0, jobs=1): err= 0: pid=83602: Tue Nov 19 09:50:23 2024 00:22:38.249 read: IOPS=245, BW=982KiB/s (1006kB/s)(9824KiB/10001msec) 00:22:38.249 slat (usec): min=4, max=8025, avg=30.32, stdev=300.39 00:22:38.249 clat (usec): min=1793, max=122047, avg=65035.75, stdev=19809.32 00:22:38.249 lat (usec): min=1801, max=122058, avg=65066.07, stdev=19815.97 00:22:38.249 clat percentiles (msec): 00:22:38.249 | 1.00th=[ 4], 5.00th=[ 
38], 10.00th=[ 46], 20.00th=[ 48], 00:22:38.249 | 30.00th=[ 51], 40.00th=[ 61], 50.00th=[ 69], 60.00th=[ 72], 00:22:38.249 | 70.00th=[ 73], 80.00th=[ 81], 90.00th=[ 87], 95.00th=[ 99], 00:22:38.249 | 99.00th=[ 115], 99.50th=[ 117], 99.90th=[ 123], 99.95th=[ 123], 00:22:38.249 | 99.99th=[ 123] 00:22:38.249 bw ( KiB/s): min= 768, max= 1072, per=4.25%, avg=956.21, stdev=83.84, samples=19 00:22:38.249 iops : min= 192, max= 268, avg=239.05, stdev=20.96, samples=19 00:22:38.249 lat (msec) : 2=0.12%, 4=1.26%, 10=0.41%, 20=0.16%, 50=27.00% 00:22:38.249 lat (msec) : 100=66.57%, 250=4.48% 00:22:38.249 cpu : usr=36.67%, sys=1.79%, ctx=1105, majf=0, minf=9 00:22:38.249 IO depths : 1=0.1%, 2=0.3%, 4=1.4%, 8=82.7%, 16=15.6%, 32=0.0%, >=64=0.0% 00:22:38.249 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:38.249 complete : 0=0.0%, 4=87.0%, 8=12.6%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:38.249 issued rwts: total=2456,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:38.249 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:38.249 filename1: (groupid=0, jobs=1): err= 0: pid=83603: Tue Nov 19 09:50:23 2024 00:22:38.249 read: IOPS=246, BW=986KiB/s (1009kB/s)(9856KiB/10001msec) 00:22:38.249 slat (usec): min=4, max=8049, avg=27.98, stdev=322.85 00:22:38.249 clat (usec): min=685, max=124011, avg=64826.22, stdev=19923.07 00:22:38.249 lat (usec): min=693, max=124027, avg=64854.20, stdev=19922.40 00:22:38.249 clat percentiles (msec): 00:22:38.249 | 1.00th=[ 4], 5.00th=[ 36], 10.00th=[ 47], 20.00th=[ 48], 00:22:38.249 | 30.00th=[ 51], 40.00th=[ 61], 50.00th=[ 69], 60.00th=[ 72], 00:22:38.249 | 70.00th=[ 72], 80.00th=[ 80], 90.00th=[ 86], 95.00th=[ 97], 00:22:38.249 | 99.00th=[ 121], 99.50th=[ 121], 99.90th=[ 125], 99.95th=[ 125], 00:22:38.249 | 99.99th=[ 125] 00:22:38.249 bw ( KiB/s): min= 768, max= 1114, per=4.28%, avg=962.63, stdev=94.72, samples=19 00:22:38.249 iops : min= 192, max= 278, avg=240.63, stdev=23.64, samples=19 00:22:38.249 lat (usec) : 750=0.12% 00:22:38.249 lat (msec) : 4=1.30%, 10=0.53%, 20=0.12%, 50=27.80%, 100=65.38% 00:22:38.249 lat (msec) : 250=4.75% 00:22:38.249 cpu : usr=33.00%, sys=1.72%, ctx=929, majf=0, minf=9 00:22:38.249 IO depths : 1=0.1%, 2=0.2%, 4=0.9%, 8=83.2%, 16=15.7%, 32=0.0%, >=64=0.0% 00:22:38.249 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:38.249 complete : 0=0.0%, 4=86.9%, 8=12.9%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:38.249 issued rwts: total=2464,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:38.249 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:38.249 filename2: (groupid=0, jobs=1): err= 0: pid=83604: Tue Nov 19 09:50:23 2024 00:22:38.249 read: IOPS=230, BW=921KiB/s (943kB/s)(9252KiB/10047msec) 00:22:38.249 slat (usec): min=4, max=8025, avg=19.35, stdev=186.66 00:22:38.249 clat (msec): min=2, max=143, avg=69.30, stdev=22.79 00:22:38.249 lat (msec): min=2, max=144, avg=69.32, stdev=22.79 00:22:38.249 clat percentiles (msec): 00:22:38.249 | 1.00th=[ 5], 5.00th=[ 24], 10.00th=[ 46], 20.00th=[ 50], 00:22:38.249 | 30.00th=[ 61], 40.00th=[ 72], 50.00th=[ 72], 60.00th=[ 74], 00:22:38.249 | 70.00th=[ 81], 80.00th=[ 85], 90.00th=[ 96], 95.00th=[ 108], 00:22:38.249 | 99.00th=[ 121], 99.50th=[ 121], 99.90th=[ 132], 99.95th=[ 144], 00:22:38.249 | 99.99th=[ 144] 00:22:38.249 bw ( KiB/s): min= 600, max= 1920, per=4.10%, avg=921.30, stdev=253.84, samples=20 00:22:38.249 iops : min= 150, max= 480, avg=230.30, stdev=63.45, samples=20 00:22:38.249 lat (msec) : 4=0.69%, 10=2.33%, 20=1.64%, 50=16.08%, 
100=72.76% 00:22:38.249 lat (msec) : 250=6.49% 00:22:38.249 cpu : usr=36.37%, sys=2.16%, ctx=1077, majf=0, minf=9 00:22:38.249 IO depths : 1=0.1%, 2=1.2%, 4=4.5%, 8=78.2%, 16=16.1%, 32=0.0%, >=64=0.0% 00:22:38.249 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:38.249 complete : 0=0.0%, 4=88.8%, 8=10.2%, 16=1.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:38.249 issued rwts: total=2313,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:38.249 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:38.249 filename2: (groupid=0, jobs=1): err= 0: pid=83605: Tue Nov 19 09:50:23 2024 00:22:38.249 read: IOPS=215, BW=861KiB/s (882kB/s)(8640KiB/10036msec) 00:22:38.249 slat (usec): min=4, max=4029, avg=17.67, stdev=115.90 00:22:38.249 clat (msec): min=11, max=168, avg=74.19, stdev=24.24 00:22:38.249 lat (msec): min=11, max=168, avg=74.21, stdev=24.24 00:22:38.249 clat percentiles (msec): 00:22:38.249 | 1.00th=[ 13], 5.00th=[ 29], 10.00th=[ 48], 20.00th=[ 57], 00:22:38.249 | 30.00th=[ 68], 40.00th=[ 72], 50.00th=[ 74], 60.00th=[ 78], 00:22:38.249 | 70.00th=[ 81], 80.00th=[ 90], 90.00th=[ 104], 95.00th=[ 121], 00:22:38.249 | 99.00th=[ 140], 99.50th=[ 142], 99.90th=[ 165], 99.95th=[ 169], 00:22:38.249 | 99.99th=[ 169] 00:22:38.249 bw ( KiB/s): min= 512, max= 1664, per=3.82%, avg=858.80, stdev=225.02, samples=20 00:22:38.249 iops : min= 128, max= 416, avg=214.70, stdev=56.26, samples=20 00:22:38.249 lat (msec) : 20=2.96%, 50=11.81%, 100=73.10%, 250=12.13% 00:22:38.249 cpu : usr=43.04%, sys=2.46%, ctx=1656, majf=0, minf=9 00:22:38.249 IO depths : 1=0.1%, 2=3.3%, 4=13.4%, 8=68.8%, 16=14.4%, 32=0.0%, >=64=0.0% 00:22:38.249 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:38.249 complete : 0=0.0%, 4=91.1%, 8=5.9%, 16=3.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:38.249 issued rwts: total=2160,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:38.249 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:38.249 filename2: (groupid=0, jobs=1): err= 0: pid=83606: Tue Nov 19 09:50:23 2024 00:22:38.249 read: IOPS=232, BW=930KiB/s (952kB/s)(9340KiB/10044msec) 00:22:38.249 slat (usec): min=5, max=8026, avg=30.26, stdev=370.45 00:22:38.249 clat (msec): min=16, max=121, avg=68.65, stdev=19.09 00:22:38.249 lat (msec): min=16, max=121, avg=68.68, stdev=19.12 00:22:38.249 clat percentiles (msec): 00:22:38.249 | 1.00th=[ 24], 5.00th=[ 36], 10.00th=[ 48], 20.00th=[ 48], 00:22:38.249 | 30.00th=[ 61], 40.00th=[ 70], 50.00th=[ 72], 60.00th=[ 72], 00:22:38.249 | 70.00th=[ 75], 80.00th=[ 84], 90.00th=[ 96], 95.00th=[ 106], 00:22:38.249 | 99.00th=[ 118], 99.50th=[ 121], 99.90th=[ 122], 99.95th=[ 123], 00:22:38.250 | 99.99th=[ 123] 00:22:38.250 bw ( KiB/s): min= 660, max= 1544, per=4.12%, avg=927.40, stdev=167.53, samples=20 00:22:38.250 iops : min= 165, max= 386, avg=231.85, stdev=41.88, samples=20 00:22:38.250 lat (msec) : 20=0.60%, 50=22.01%, 100=71.69%, 250=5.70% 00:22:38.250 cpu : usr=31.86%, sys=1.43%, ctx=863, majf=0, minf=9 00:22:38.250 IO depths : 1=0.1%, 2=0.3%, 4=1.2%, 8=82.0%, 16=16.5%, 32=0.0%, >=64=0.0% 00:22:38.250 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:38.250 complete : 0=0.0%, 4=87.8%, 8=12.0%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:38.250 issued rwts: total=2335,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:38.250 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:38.250 filename2: (groupid=0, jobs=1): err= 0: pid=83607: Tue Nov 19 09:50:23 2024 00:22:38.250 read: IOPS=232, BW=932KiB/s 
(954kB/s)(9352KiB/10036msec) 00:22:38.250 slat (usec): min=7, max=8046, avg=28.02, stdev=331.38 00:22:38.250 clat (msec): min=19, max=122, avg=68.49, stdev=18.44 00:22:38.250 lat (msec): min=19, max=122, avg=68.52, stdev=18.46 00:22:38.250 clat percentiles (msec): 00:22:38.250 | 1.00th=[ 32], 5.00th=[ 40], 10.00th=[ 48], 20.00th=[ 48], 00:22:38.250 | 30.00th=[ 60], 40.00th=[ 69], 50.00th=[ 72], 60.00th=[ 72], 00:22:38.250 | 70.00th=[ 74], 80.00th=[ 83], 90.00th=[ 95], 95.00th=[ 101], 00:22:38.250 | 99.00th=[ 121], 99.50th=[ 121], 99.90th=[ 123], 99.95th=[ 123], 00:22:38.250 | 99.99th=[ 123] 00:22:38.250 bw ( KiB/s): min= 688, max= 1293, per=4.14%, avg=930.75, stdev=118.61, samples=20 00:22:38.250 iops : min= 172, max= 323, avg=232.65, stdev=29.60, samples=20 00:22:38.250 lat (msec) : 20=0.09%, 50=24.64%, 100=70.23%, 250=5.05% 00:22:38.250 cpu : usr=31.48%, sys=1.57%, ctx=859, majf=0, minf=9 00:22:38.250 IO depths : 1=0.1%, 2=0.4%, 4=1.5%, 8=81.8%, 16=16.2%, 32=0.0%, >=64=0.0% 00:22:38.250 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:38.250 complete : 0=0.0%, 4=87.6%, 8=12.0%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:38.250 issued rwts: total=2338,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:38.250 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:38.250 filename2: (groupid=0, jobs=1): err= 0: pid=83608: Tue Nov 19 09:50:23 2024 00:22:38.250 read: IOPS=235, BW=943KiB/s (966kB/s)(9456KiB/10026msec) 00:22:38.250 slat (usec): min=5, max=8028, avg=23.44, stdev=247.18 00:22:38.250 clat (msec): min=19, max=120, avg=67.71, stdev=18.10 00:22:38.250 lat (msec): min=19, max=120, avg=67.73, stdev=18.10 00:22:38.250 clat percentiles (msec): 00:22:38.250 | 1.00th=[ 32], 5.00th=[ 41], 10.00th=[ 48], 20.00th=[ 48], 00:22:38.250 | 30.00th=[ 57], 40.00th=[ 64], 50.00th=[ 72], 60.00th=[ 72], 00:22:38.250 | 70.00th=[ 74], 80.00th=[ 82], 90.00th=[ 89], 95.00th=[ 105], 00:22:38.250 | 99.00th=[ 120], 99.50th=[ 121], 99.90th=[ 121], 99.95th=[ 121], 00:22:38.250 | 99.99th=[ 121] 00:22:38.250 bw ( KiB/s): min= 712, max= 1152, per=4.17%, avg=937.26, stdev=97.52, samples=19 00:22:38.250 iops : min= 178, max= 288, avg=234.32, stdev=24.38, samples=19 00:22:38.250 lat (msec) : 20=0.08%, 50=24.92%, 100=69.63%, 250=5.37% 00:22:38.250 cpu : usr=34.34%, sys=1.76%, ctx=930, majf=0, minf=9 00:22:38.250 IO depths : 1=0.1%, 2=0.8%, 4=3.2%, 8=80.5%, 16=15.5%, 32=0.0%, >=64=0.0% 00:22:38.250 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:38.250 complete : 0=0.0%, 4=87.7%, 8=11.6%, 16=0.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:38.250 issued rwts: total=2364,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:38.250 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:38.250 filename2: (groupid=0, jobs=1): err= 0: pid=83609: Tue Nov 19 09:50:23 2024 00:22:38.250 read: IOPS=237, BW=949KiB/s (972kB/s)(9512KiB/10020msec) 00:22:38.250 slat (usec): min=4, max=8037, avg=27.26, stdev=261.24 00:22:38.250 clat (msec): min=19, max=145, avg=67.27, stdev=18.72 00:22:38.250 lat (msec): min=19, max=145, avg=67.30, stdev=18.72 00:22:38.250 clat percentiles (msec): 00:22:38.250 | 1.00th=[ 32], 5.00th=[ 42], 10.00th=[ 47], 20.00th=[ 49], 00:22:38.250 | 30.00th=[ 54], 40.00th=[ 62], 50.00th=[ 70], 60.00th=[ 72], 00:22:38.250 | 70.00th=[ 75], 80.00th=[ 81], 90.00th=[ 92], 95.00th=[ 104], 00:22:38.250 | 99.00th=[ 117], 99.50th=[ 123], 99.90th=[ 126], 99.95th=[ 126], 00:22:38.250 | 99.99th=[ 146] 00:22:38.250 bw ( KiB/s): min= 688, max= 1152, per=4.20%, avg=943.16, 
stdev=105.89, samples=19 00:22:38.250 iops : min= 172, max= 288, avg=235.79, stdev=26.47, samples=19 00:22:38.250 lat (msec) : 20=0.08%, 50=23.38%, 100=70.94%, 250=5.59% 00:22:38.250 cpu : usr=40.25%, sys=1.89%, ctx=1230, majf=0, minf=9 00:22:38.250 IO depths : 1=0.1%, 2=0.5%, 4=1.8%, 8=81.9%, 16=15.8%, 32=0.0%, >=64=0.0% 00:22:38.250 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:38.250 complete : 0=0.0%, 4=87.4%, 8=12.2%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:38.250 issued rwts: total=2378,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:38.250 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:38.250 filename2: (groupid=0, jobs=1): err= 0: pid=83610: Tue Nov 19 09:50:23 2024 00:22:38.250 read: IOPS=234, BW=940KiB/s (962kB/s)(9424KiB/10030msec) 00:22:38.250 slat (usec): min=4, max=8029, avg=24.90, stdev=248.54 00:22:38.250 clat (msec): min=33, max=131, avg=67.95, stdev=18.00 00:22:38.250 lat (msec): min=33, max=132, avg=67.98, stdev=18.00 00:22:38.250 clat percentiles (msec): 00:22:38.250 | 1.00th=[ 36], 5.00th=[ 41], 10.00th=[ 47], 20.00th=[ 49], 00:22:38.250 | 30.00th=[ 58], 40.00th=[ 64], 50.00th=[ 71], 60.00th=[ 72], 00:22:38.250 | 70.00th=[ 74], 80.00th=[ 81], 90.00th=[ 92], 95.00th=[ 105], 00:22:38.250 | 99.00th=[ 120], 99.50th=[ 121], 99.90th=[ 123], 99.95th=[ 123], 00:22:38.250 | 99.99th=[ 132] 00:22:38.250 bw ( KiB/s): min= 688, max= 1192, per=4.17%, avg=938.80, stdev=100.16, samples=20 00:22:38.250 iops : min= 172, max= 298, avg=234.70, stdev=25.04, samples=20 00:22:38.250 lat (msec) : 50=22.75%, 100=71.52%, 250=5.73% 00:22:38.250 cpu : usr=37.06%, sys=1.81%, ctx=1046, majf=0, minf=9 00:22:38.250 IO depths : 1=0.1%, 2=0.3%, 4=1.0%, 8=82.6%, 16=16.0%, 32=0.0%, >=64=0.0% 00:22:38.250 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:38.250 complete : 0=0.0%, 4=87.3%, 8=12.5%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:38.250 issued rwts: total=2356,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:38.250 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:38.250 filename2: (groupid=0, jobs=1): err= 0: pid=83611: Tue Nov 19 09:50:23 2024 00:22:38.250 read: IOPS=228, BW=915KiB/s (937kB/s)(9180KiB/10033msec) 00:22:38.250 slat (usec): min=4, max=8026, avg=24.14, stdev=258.71 00:22:38.250 clat (msec): min=21, max=143, avg=69.76, stdev=19.12 00:22:38.250 lat (msec): min=21, max=143, avg=69.78, stdev=19.12 00:22:38.250 clat percentiles (msec): 00:22:38.250 | 1.00th=[ 27], 5.00th=[ 40], 10.00th=[ 47], 20.00th=[ 51], 00:22:38.250 | 30.00th=[ 60], 40.00th=[ 69], 50.00th=[ 72], 60.00th=[ 74], 00:22:38.250 | 70.00th=[ 79], 80.00th=[ 83], 90.00th=[ 95], 95.00th=[ 106], 00:22:38.250 | 99.00th=[ 118], 99.50th=[ 121], 99.90th=[ 138], 99.95th=[ 144], 00:22:38.250 | 99.99th=[ 144] 00:22:38.250 bw ( KiB/s): min= 632, max= 1280, per=4.07%, avg=914.40, stdev=122.93, samples=20 00:22:38.250 iops : min= 158, max= 320, avg=228.60, stdev=30.73, samples=20 00:22:38.250 lat (msec) : 50=18.78%, 100=75.56%, 250=5.66% 00:22:38.250 cpu : usr=38.94%, sys=2.04%, ctx=1151, majf=0, minf=9 00:22:38.250 IO depths : 1=0.1%, 2=0.9%, 4=3.2%, 8=79.7%, 16=16.0%, 32=0.0%, >=64=0.0% 00:22:38.250 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:38.250 complete : 0=0.0%, 4=88.3%, 8=11.0%, 16=0.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:38.250 issued rwts: total=2295,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:38.250 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:38.250 00:22:38.250 Run status group 0 (all 
jobs): 00:22:38.250 READ: bw=21.9MiB/s (23.0MB/s), 861KiB/s-986KiB/s (882kB/s-1009kB/s), io=221MiB (232MB), run=10001-10060msec 00:22:38.250 09:50:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:22:38.250 09:50:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:22:38.250 09:50:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:22:38.250 09:50:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:22:38.250 09:50:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:22:38.250 09:50:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:22:38.250 09:50:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:38.250 09:50:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:38.250 09:50:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:38.250 09:50:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:22:38.250 09:50:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:38.250 09:50:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:38.250 09:50:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:38.250 09:50:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:22:38.250 09:50:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:22:38.250 09:50:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:22:38.250 09:50:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:38.250 09:50:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:38.250 09:50:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:38.250 09:50:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:38.250 09:50:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:22:38.250 09:50:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:38.250 09:50:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:38.250 09:50:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:38.250 09:50:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:22:38.250 09:50:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:22:38.250 09:50:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:22:38.250 09:50:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:22:38.250 09:50:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:38.250 09:50:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:38.251 09:50:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:38.251 09:50:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:22:38.251 09:50:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:38.251 09:50:24 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:22:38.251 09:50:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:38.251 09:50:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:22:38.251 09:50:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:22:38.251 09:50:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:22:38.251 09:50:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:22:38.251 09:50:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:22:38.251 09:50:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:22:38.251 09:50:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:22:38.251 09:50:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:22:38.251 09:50:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:22:38.251 09:50:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:22:38.251 09:50:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:22:38.251 09:50:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:22:38.251 09:50:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:38.251 09:50:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:38.251 bdev_null0 00:22:38.251 09:50:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:38.251 09:50:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:22:38.251 09:50:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:38.251 09:50:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:38.251 09:50:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:38.251 09:50:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:22:38.251 09:50:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:38.251 09:50:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:38.251 09:50:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:38.251 09:50:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:22:38.251 09:50:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:38.251 09:50:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:38.251 [2024-11-19 09:50:24.274016] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:22:38.251 09:50:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:38.251 09:50:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:22:38.251 09:50:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:22:38.251 09:50:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:22:38.251 09:50:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 
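Condensed from the rpc_cmd entries traced above, each create_subsystem pass in this test issues four RPCs per subsystem: a 64 MiB null bdev with 512-byte blocks, 16 bytes of metadata and DIF type 1 is created, wrapped in an NVMe-oF subsystem, given that bdev as a namespace, and exposed over TCP on 10.0.0.3:4420. A minimal sketch, assuming rpc_cmd resolves to scripts/rpc.py pointed at the target's RPC socket as in the autotest helpers:

# One null bdev per subsystem, exported over NVMe/TCP (arguments taken from the trace above).
sub=0
rpc_cmd bdev_null_create "bdev_null${sub}" 64 512 --md-size 16 --dif-type 1
rpc_cmd nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode${sub}" --serial-number "53313233-${sub}" --allow-any-host
rpc_cmd nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode${sub}" "bdev_null${sub}"
rpc_cmd nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode${sub}" -t tcp -a 10.0.0.3 -s 4420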
00:22:38.251 09:50:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:38.251 09:50:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:38.251 bdev_null1 00:22:38.251 09:50:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:38.251 09:50:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:22:38.251 09:50:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:38.251 09:50:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:38.251 09:50:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:38.251 09:50:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:22:38.251 09:50:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:38.251 09:50:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:38.251 09:50:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:38.251 09:50:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:22:38.251 09:50:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:38.251 09:50:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:38.251 09:50:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:38.251 09:50:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:22:38.251 09:50:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:22:38.251 09:50:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:22:38.251 09:50:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:22:38.251 09:50:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:22:38.251 09:50:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:38.251 09:50:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:38.251 { 00:22:38.251 "params": { 00:22:38.251 "name": "Nvme$subsystem", 00:22:38.251 "trtype": "$TEST_TRANSPORT", 00:22:38.251 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:38.251 "adrfam": "ipv4", 00:22:38.251 "trsvcid": "$NVMF_PORT", 00:22:38.251 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:38.251 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:38.251 "hdgst": ${hdgst:-false}, 00:22:38.251 "ddgst": ${ddgst:-false} 00:22:38.251 }, 00:22:38.251 "method": "bdev_nvme_attach_controller" 00:22:38.251 } 00:22:38.251 EOF 00:22:38.251 )") 00:22:38.251 09:50:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:38.251 09:50:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:22:38.251 09:50:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:38.251 09:50:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:22:38.251 09:50:24 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:38.251 09:50:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:22:38.251 09:50:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:22:38.251 09:50:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:22:38.251 09:50:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:22:38.251 09:50:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:38.251 09:50:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:22:38.251 09:50:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:22:38.251 09:50:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:38.251 09:50:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:22:38.251 09:50:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:22:38.251 09:50:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:22:38.251 09:50:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:38.251 09:50:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:38.251 09:50:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:22:38.251 09:50:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:22:38.251 09:50:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:38.251 09:50:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:38.251 { 00:22:38.251 "params": { 00:22:38.251 "name": "Nvme$subsystem", 00:22:38.251 "trtype": "$TEST_TRANSPORT", 00:22:38.251 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:38.251 "adrfam": "ipv4", 00:22:38.251 "trsvcid": "$NVMF_PORT", 00:22:38.251 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:38.251 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:38.251 "hdgst": ${hdgst:-false}, 00:22:38.251 "ddgst": ${ddgst:-false} 00:22:38.251 }, 00:22:38.251 "method": "bdev_nvme_attach_controller" 00:22:38.251 } 00:22:38.251 EOF 00:22:38.251 )") 00:22:38.251 09:50:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:22:38.251 09:50:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:22:38.251 09:50:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
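The heredoc fragments captured above are the per-controller halves of the JSON handed to fio via --spdk_json_conf (/dev/fd/62); the printf in the next entries shows them joined with a comma. A sketch of the complete file, assuming the usual "subsystems"/"bdev" wrapper of SPDK JSON configs (the wrapper itself never appears in the trace, only the bdev_nvme_attach_controller entries); the /tmp path is illustrative only:

# Sketch of the config fed to the fio bdev plugin; the two attach_controller entries are
# exactly the printf output traced below, the surrounding wrapper is an assumption.
cat > /tmp/nvmf_dif.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        { "method": "bdev_nvme_attach_controller",
          "params": { "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.3", "adrfam": "ipv4",
                      "trsvcid": "4420", "subnqn": "nqn.2016-06.io.spdk:cnode0",
                      "hostnqn": "nqn.2016-06.io.spdk:host0", "hdgst": false, "ddgst": false } },
        { "method": "bdev_nvme_attach_controller",
          "params": { "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.3", "adrfam": "ipv4",
                      "trsvcid": "4420", "subnqn": "nqn.2016-06.io.spdk:cnode1",
                      "hostnqn": "nqn.2016-06.io.spdk:host1", "hdgst": false, "ddgst": false } }
      ]
    }
  ]
}
EOF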
00:22:38.251 09:50:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:22:38.251 09:50:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:38.251 "params": { 00:22:38.251 "name": "Nvme0", 00:22:38.251 "trtype": "tcp", 00:22:38.251 "traddr": "10.0.0.3", 00:22:38.251 "adrfam": "ipv4", 00:22:38.251 "trsvcid": "4420", 00:22:38.251 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:38.251 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:38.251 "hdgst": false, 00:22:38.251 "ddgst": false 00:22:38.251 }, 00:22:38.251 "method": "bdev_nvme_attach_controller" 00:22:38.251 },{ 00:22:38.251 "params": { 00:22:38.251 "name": "Nvme1", 00:22:38.251 "trtype": "tcp", 00:22:38.251 "traddr": "10.0.0.3", 00:22:38.251 "adrfam": "ipv4", 00:22:38.251 "trsvcid": "4420", 00:22:38.251 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:38.251 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:38.251 "hdgst": false, 00:22:38.251 "ddgst": false 00:22:38.251 }, 00:22:38.251 "method": "bdev_nvme_attach_controller" 00:22:38.251 }' 00:22:38.251 09:50:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:22:38.251 09:50:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:22:38.251 09:50:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:38.251 09:50:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:38.252 09:50:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:22:38.252 09:50:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:38.252 09:50:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:22:38.252 09:50:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:22:38.252 09:50:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:22:38.252 09:50:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:38.252 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:22:38.252 ... 00:22:38.252 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:22:38.252 ... 
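Spelled out from the LD_PRELOAD and fio entries above, the run now starting is equivalent to invoking stock fio standalone with the SPDK bdev plugin preloaded. The job file below is a sketch reconstructed from this pass's parameters (bs=8k,16k,128k, numjobs=2, iodepth=8, runtime=5, files=1) and the filename0/filename1 banner, not a verbatim copy of what gen_fio_conf wrote to /dev/fd/61; the bdev names assume each NvmeX controller exposes an NvmeXn1 namespace.

# Job file sketch (two jobs x numjobs=2 gives the "Starting 4 threads" below).
cat > /tmp/dif_rand.fio <<'EOF'
[global]
thread=1
rw=randread
bs=8k,16k,128k
iodepth=8
numjobs=2
runtime=5

[filename0]
filename=Nvme0n1

[filename1]
filename=Nvme1n1
EOF

# Standalone equivalent of the traced invocation (plugin and fio paths as in the trace,
# config path from the sketch earlier in this log):
LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' \
    /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /tmp/nvmf_dif.json /tmp/dif_rand.fio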
00:22:38.252 fio-3.35 00:22:38.252 Starting 4 threads 00:22:43.519 00:22:43.519 filename0: (groupid=0, jobs=1): err= 0: pid=83754: Tue Nov 19 09:50:30 2024 00:22:43.519 read: IOPS=1973, BW=15.4MiB/s (16.2MB/s)(77.1MiB/5001msec) 00:22:43.519 slat (nsec): min=4098, max=73283, avg=15096.63, stdev=6674.82 00:22:43.519 clat (usec): min=870, max=7752, avg=4005.57, stdev=916.77 00:22:43.519 lat (usec): min=879, max=7765, avg=4020.66, stdev=916.84 00:22:43.519 clat percentiles (usec): 00:22:43.519 | 1.00th=[ 1876], 5.00th=[ 2245], 10.00th=[ 2409], 20.00th=[ 3294], 00:22:43.519 | 30.00th=[ 3851], 40.00th=[ 4015], 50.00th=[ 4228], 60.00th=[ 4293], 00:22:43.519 | 70.00th=[ 4555], 80.00th=[ 4686], 90.00th=[ 5014], 95.00th=[ 5211], 00:22:43.519 | 99.00th=[ 5669], 99.50th=[ 6259], 99.90th=[ 6587], 99.95th=[ 6587], 00:22:43.519 | 99.99th=[ 7767] 00:22:43.519 bw ( KiB/s): min=13440, max=18816, per=24.26%, avg=15715.56, stdev=1913.74, samples=9 00:22:43.519 iops : min= 1680, max= 2352, avg=1964.44, stdev=239.22, samples=9 00:22:43.519 lat (usec) : 1000=0.02% 00:22:43.519 lat (msec) : 2=1.67%, 4=37.57%, 10=60.74% 00:22:43.519 cpu : usr=91.94%, sys=7.12%, ctx=85, majf=0, minf=0 00:22:43.519 IO depths : 1=0.1%, 2=12.7%, 4=57.4%, 8=29.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:43.519 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:43.519 complete : 0=0.0%, 4=95.1%, 8=4.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:43.519 issued rwts: total=9870,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:43.519 latency : target=0, window=0, percentile=100.00%, depth=8 00:22:43.519 filename0: (groupid=0, jobs=1): err= 0: pid=83755: Tue Nov 19 09:50:30 2024 00:22:43.519 read: IOPS=1866, BW=14.6MiB/s (15.3MB/s)(72.9MiB/5001msec) 00:22:43.519 slat (nsec): min=3814, max=87153, avg=18139.71, stdev=7374.91 00:22:43.520 clat (usec): min=216, max=7782, avg=4221.56, stdev=734.78 00:22:43.520 lat (usec): min=237, max=7796, avg=4239.70, stdev=734.87 00:22:43.520 clat percentiles (usec): 00:22:43.520 | 1.00th=[ 1975], 5.00th=[ 2606], 10.00th=[ 3228], 20.00th=[ 3851], 00:22:43.520 | 30.00th=[ 4015], 40.00th=[ 4228], 50.00th=[ 4293], 60.00th=[ 4555], 00:22:43.520 | 70.00th=[ 4621], 80.00th=[ 4686], 90.00th=[ 4948], 95.00th=[ 5211], 00:22:43.520 | 99.00th=[ 5538], 99.50th=[ 5669], 99.90th=[ 6325], 99.95th=[ 6587], 00:22:43.520 | 99.99th=[ 7767] 00:22:43.520 bw ( KiB/s): min=13312, max=16880, per=23.02%, avg=14910.22, stdev=1273.67, samples=9 00:22:43.520 iops : min= 1664, max= 2110, avg=1863.78, stdev=159.21, samples=9 00:22:43.520 lat (usec) : 250=0.01% 00:22:43.520 lat (msec) : 2=1.22%, 4=28.53%, 10=70.24% 00:22:43.520 cpu : usr=92.08%, sys=6.94%, ctx=8, majf=0, minf=0 00:22:43.520 IO depths : 1=0.1%, 2=17.1%, 4=55.1%, 8=27.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:43.520 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:43.520 complete : 0=0.0%, 4=93.3%, 8=6.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:43.520 issued rwts: total=9335,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:43.520 latency : target=0, window=0, percentile=100.00%, depth=8 00:22:43.520 filename1: (groupid=0, jobs=1): err= 0: pid=83756: Tue Nov 19 09:50:30 2024 00:22:43.520 read: IOPS=2059, BW=16.1MiB/s (16.9MB/s)(80.5MiB/5004msec) 00:22:43.520 slat (nsec): min=5188, max=91879, avg=15520.07, stdev=8209.99 00:22:43.520 clat (usec): min=755, max=7083, avg=3839.09, stdev=1022.90 00:22:43.520 lat (usec): min=764, max=7104, avg=3854.61, stdev=1025.07 00:22:43.520 clat percentiles (usec): 00:22:43.520 | 1.00th=[ 1172], 5.00th=[ 
1467], 10.00th=[ 2212], 20.00th=[ 3032], 00:22:43.520 | 30.00th=[ 3523], 40.00th=[ 3884], 50.00th=[ 4047], 60.00th=[ 4293], 00:22:43.520 | 70.00th=[ 4555], 80.00th=[ 4686], 90.00th=[ 4817], 95.00th=[ 5014], 00:22:43.520 | 99.00th=[ 5538], 99.50th=[ 5669], 99.90th=[ 6259], 99.95th=[ 6390], 00:22:43.520 | 99.99th=[ 6718] 00:22:43.520 bw ( KiB/s): min=13312, max=19456, per=25.68%, avg=16631.11, stdev=2287.34, samples=9 00:22:43.520 iops : min= 1664, max= 2432, avg=2078.89, stdev=285.92, samples=9 00:22:43.520 lat (usec) : 1000=0.75% 00:22:43.520 lat (msec) : 2=7.42%, 4=39.01%, 10=52.82% 00:22:43.520 cpu : usr=91.67%, sys=7.44%, ctx=28, majf=0, minf=0 00:22:43.520 IO depths : 1=0.1%, 2=9.9%, 4=59.3%, 8=30.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:43.520 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:43.520 complete : 0=0.0%, 4=96.2%, 8=3.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:43.520 issued rwts: total=10305,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:43.520 latency : target=0, window=0, percentile=100.00%, depth=8 00:22:43.520 filename1: (groupid=0, jobs=1): err= 0: pid=83757: Tue Nov 19 09:50:30 2024 00:22:43.520 read: IOPS=2200, BW=17.2MiB/s (18.0MB/s)(86.0MiB/5002msec) 00:22:43.520 slat (nsec): min=7079, max=82016, avg=14518.94, stdev=7300.55 00:22:43.520 clat (usec): min=943, max=7678, avg=3597.19, stdev=1001.88 00:22:43.520 lat (usec): min=952, max=7702, avg=3611.71, stdev=1001.88 00:22:43.520 clat percentiles (usec): 00:22:43.520 | 1.00th=[ 1221], 5.00th=[ 1909], 10.00th=[ 2212], 20.00th=[ 2507], 00:22:43.520 | 30.00th=[ 3032], 40.00th=[ 3556], 50.00th=[ 3884], 60.00th=[ 4080], 00:22:43.520 | 70.00th=[ 4228], 80.00th=[ 4490], 90.00th=[ 4686], 95.00th=[ 4948], 00:22:43.520 | 99.00th=[ 5342], 99.50th=[ 5604], 99.90th=[ 6587], 99.95th=[ 6915], 00:22:43.520 | 99.99th=[ 7635] 00:22:43.520 bw ( KiB/s): min=15552, max=19696, per=27.26%, avg=17658.67, stdev=1539.23, samples=9 00:22:43.520 iops : min= 1944, max= 2462, avg=2207.33, stdev=192.40, samples=9 00:22:43.520 lat (usec) : 1000=0.59% 00:22:43.520 lat (msec) : 2=5.56%, 4=50.51%, 10=43.34% 00:22:43.520 cpu : usr=92.20%, sys=6.78%, ctx=9, majf=0, minf=0 00:22:43.520 IO depths : 1=0.1%, 2=5.4%, 4=61.7%, 8=32.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:43.520 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:43.520 complete : 0=0.0%, 4=98.0%, 8=2.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:43.520 issued rwts: total=11006,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:43.520 latency : target=0, window=0, percentile=100.00%, depth=8 00:22:43.520 00:22:43.520 Run status group 0 (all jobs): 00:22:43.520 READ: bw=63.3MiB/s (66.3MB/s), 14.6MiB/s-17.2MiB/s (15.3MB/s-18.0MB/s), io=317MiB (332MB), run=5001-5004msec 00:22:43.520 09:50:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:22:43.520 09:50:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:22:43.520 09:50:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:22:43.520 09:50:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:22:43.520 09:50:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:22:43.520 09:50:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:22:43.520 09:50:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.520 09:50:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
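A quick consistency check on the 4-thread randread group that just completed: each job's bandwidth follows directly from its IOPS and the 8 KiB read size, e.g. for the first job (IOPS=1973, BW=15.4MiB/s, 77.1MiB/5001msec):

# 1973 reads/s x 8192 B ~= 15.4 MiB/s (16.2 MB/s); over the ~5 s run that is ~77 MiB.
awk 'BEGIN { bw = 1973 * 8192; printf "%.1f MiB/s  %.1f MB/s  %.1f MiB total\n", bw/2^20, bw/1e6, bw*5.001/2^20 }'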
00:22:43.520 09:50:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.520 09:50:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:22:43.520 09:50:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.520 09:50:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:43.520 09:50:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.520 09:50:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:22:43.520 09:50:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:22:43.520 09:50:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:22:43.520 09:50:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:43.520 09:50:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.520 09:50:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:43.520 09:50:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.520 09:50:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:22:43.520 09:50:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.520 09:50:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:43.520 ************************************ 00:22:43.520 END TEST fio_dif_rand_params 00:22:43.520 ************************************ 00:22:43.520 09:50:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.520 00:22:43.520 real 0m23.677s 00:22:43.520 user 2m3.496s 00:22:43.520 sys 0m8.292s 00:22:43.520 09:50:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:43.520 09:50:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:43.520 09:50:30 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:22:43.520 09:50:30 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:43.520 09:50:30 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:43.520 09:50:30 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:22:43.520 ************************************ 00:22:43.520 START TEST fio_dif_digest 00:22:43.520 ************************************ 00:22:43.520 09:50:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:22:43.520 09:50:30 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:22:43.520 09:50:30 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:22:43.520 09:50:30 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:22:43.520 09:50:30 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:22:43.520 09:50:30 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:22:43.520 09:50:30 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:22:43.520 09:50:30 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:22:43.520 09:50:30 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:22:43.520 09:50:30 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:22:43.520 09:50:30 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:22:43.520 09:50:30 
nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:22:43.520 09:50:30 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:22:43.520 09:50:30 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:22:43.520 09:50:30 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:22:43.520 09:50:30 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:22:43.520 09:50:30 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:22:43.520 09:50:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.520 09:50:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:22:43.520 bdev_null0 00:22:43.520 09:50:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.520 09:50:30 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:22:43.520 09:50:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.520 09:50:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:22:43.520 09:50:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.520 09:50:30 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:22:43.520 09:50:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.520 09:50:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:22:43.520 09:50:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.520 09:50:30 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:22:43.520 09:50:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.520 09:50:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:22:43.520 [2024-11-19 09:50:30.485549] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:22:43.520 09:50:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.520 09:50:30 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:22:43.520 09:50:30 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:22:43.520 09:50:30 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:22:43.520 09:50:30 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:43.520 09:50:30 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:22:43.521 09:50:30 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:22:43.521 09:50:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:43.521 09:50:30 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:22:43.521 09:50:30 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:43.521 09:50:30 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:22:43.521 09:50:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:22:43.521 09:50:30 nvmf_dif.fio_dif_digest 
-- target/dif.sh@56 -- # cat 00:22:43.521 09:50:30 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:43.521 { 00:22:43.521 "params": { 00:22:43.521 "name": "Nvme$subsystem", 00:22:43.521 "trtype": "$TEST_TRANSPORT", 00:22:43.521 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:43.521 "adrfam": "ipv4", 00:22:43.521 "trsvcid": "$NVMF_PORT", 00:22:43.521 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:43.521 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:43.521 "hdgst": ${hdgst:-false}, 00:22:43.521 "ddgst": ${ddgst:-false} 00:22:43.521 }, 00:22:43.521 "method": "bdev_nvme_attach_controller" 00:22:43.521 } 00:22:43.521 EOF 00:22:43.521 )") 00:22:43.521 09:50:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:43.521 09:50:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:22:43.521 09:50:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:43.521 09:50:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:22:43.521 09:50:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:22:43.521 09:50:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:43.521 09:50:30 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:22:43.521 09:50:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:22:43.521 09:50:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:43.521 09:50:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:43.521 09:50:30 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:22:43.521 09:50:30 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:22:43.521 09:50:30 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
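The ldd | grep | awk entries above are the fio wrapper probing whether the SPDK fio plugin was built against a sanitizer runtime that must be preloaded ahead of it; in this build nothing is found, so only the plugin itself lands in LD_PRELOAD. Roughly, per the trace:

# Sanitizer probe, condensed (libclang_rt.asan is checked the same way on the next iteration).
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')   # empty here: not an ASan build
if [[ -n "$asan_lib" ]]; then
    LD_PRELOAD="$asan_lib $plugin"
else
    LD_PRELOAD=" $plugin"
fi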
00:22:43.521 09:50:30 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:22:43.521 09:50:30 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:43.521 "params": { 00:22:43.521 "name": "Nvme0", 00:22:43.521 "trtype": "tcp", 00:22:43.521 "traddr": "10.0.0.3", 00:22:43.521 "adrfam": "ipv4", 00:22:43.521 "trsvcid": "4420", 00:22:43.521 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:43.521 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:43.521 "hdgst": true, 00:22:43.521 "ddgst": true 00:22:43.521 }, 00:22:43.521 "method": "bdev_nvme_attach_controller" 00:22:43.521 }' 00:22:43.521 09:50:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:22:43.521 09:50:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:22:43.521 09:50:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:43.521 09:50:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:43.521 09:50:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:22:43.521 09:50:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:43.521 09:50:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:22:43.521 09:50:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:22:43.521 09:50:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:22:43.521 09:50:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:43.521 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:22:43.521 ... 
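The digest pass reuses the same plumbing with the parameters set above (bs=128k, numjobs=3, iodepth=3, runtime=10) against a single DIF type 3 namespace; header and data digests are switched on through the hdgst/ddgst flags in the JSON config just printed, not in the job file. A sketch of the implied job file, with the bdev name again assumed to be Nvme0n1:

# One job, three workers: matches the "Starting 3 threads" that follows.
cat > /tmp/dif_digest.fio <<'EOF'
[global]
thread=1
rw=randread
bs=128k
iodepth=3
numjobs=3
runtime=10

[filename0]
filename=Nvme0n1
EOF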
00:22:43.521 fio-3.35 00:22:43.521 Starting 3 threads 00:22:55.814 00:22:55.814 filename0: (groupid=0, jobs=1): err= 0: pid=83863: Tue Nov 19 09:50:41 2024 00:22:55.814 read: IOPS=223, BW=27.9MiB/s (29.3MB/s)(279MiB/10004msec) 00:22:55.814 slat (nsec): min=7142, max=50811, avg=10814.04, stdev=4508.48 00:22:55.814 clat (usec): min=4947, max=22921, avg=13400.75, stdev=763.71 00:22:55.814 lat (usec): min=4956, max=22935, avg=13411.56, stdev=763.63 00:22:55.814 clat percentiles (usec): 00:22:55.815 | 1.00th=[12649], 5.00th=[12780], 10.00th=[12780], 20.00th=[13042], 00:22:55.815 | 30.00th=[13042], 40.00th=[13173], 50.00th=[13304], 60.00th=[13304], 00:22:55.815 | 70.00th=[13566], 80.00th=[13829], 90.00th=[14091], 95.00th=[14222], 00:22:55.815 | 99.00th=[17433], 99.50th=[17695], 99.90th=[22938], 99.95th=[22938], 00:22:55.815 | 99.99th=[22938] 00:22:55.815 bw ( KiB/s): min=26112, max=29952, per=33.32%, avg=28574.68, stdev=908.61, samples=19 00:22:55.815 iops : min= 204, max= 234, avg=223.21, stdev= 7.11, samples=19 00:22:55.815 lat (msec) : 10=0.13%, 20=99.73%, 50=0.13% 00:22:55.815 cpu : usr=90.88%, sys=8.48%, ctx=17, majf=0, minf=0 00:22:55.815 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:55.815 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:55.815 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:55.815 issued rwts: total=2235,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:55.815 latency : target=0, window=0, percentile=100.00%, depth=3 00:22:55.815 filename0: (groupid=0, jobs=1): err= 0: pid=83864: Tue Nov 19 09:50:41 2024 00:22:55.815 read: IOPS=223, BW=27.9MiB/s (29.3MB/s)(279MiB/10008msec) 00:22:55.815 slat (nsec): min=7436, max=60269, avg=15014.25, stdev=4301.35 00:22:55.815 clat (usec): min=9329, max=22965, avg=13400.19, stdev=716.91 00:22:55.815 lat (usec): min=9343, max=22982, avg=13415.20, stdev=717.08 00:22:55.815 clat percentiles (usec): 00:22:55.815 | 1.00th=[12649], 5.00th=[12780], 10.00th=[12780], 20.00th=[13042], 00:22:55.815 | 30.00th=[13042], 40.00th=[13173], 50.00th=[13304], 60.00th=[13304], 00:22:55.815 | 70.00th=[13566], 80.00th=[13829], 90.00th=[14091], 95.00th=[14222], 00:22:55.815 | 99.00th=[17433], 99.50th=[17695], 99.90th=[22938], 99.95th=[22938], 00:22:55.815 | 99.99th=[22938] 00:22:55.815 bw ( KiB/s): min=26112, max=29952, per=33.28%, avg=28537.26, stdev=932.32, samples=19 00:22:55.815 iops : min= 204, max= 234, avg=222.95, stdev= 7.28, samples=19 00:22:55.815 lat (msec) : 10=0.13%, 20=99.73%, 50=0.13% 00:22:55.815 cpu : usr=91.61%, sys=7.83%, ctx=108, majf=0, minf=0 00:22:55.815 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:55.815 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:55.815 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:55.815 issued rwts: total=2235,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:55.815 latency : target=0, window=0, percentile=100.00%, depth=3 00:22:55.815 filename0: (groupid=0, jobs=1): err= 0: pid=83865: Tue Nov 19 09:50:41 2024 00:22:55.815 read: IOPS=223, BW=27.9MiB/s (29.3MB/s)(279MiB/10008msec) 00:22:55.815 slat (nsec): min=7777, max=82615, avg=14611.66, stdev=4399.25 00:22:55.815 clat (usec): min=9328, max=22980, avg=13401.51, stdev=716.83 00:22:55.815 lat (usec): min=9342, max=22993, avg=13416.13, stdev=717.14 00:22:55.815 clat percentiles (usec): 00:22:55.815 | 1.00th=[12649], 5.00th=[12780], 10.00th=[12780], 20.00th=[13042], 00:22:55.815 | 
30.00th=[13042], 40.00th=[13173], 50.00th=[13304], 60.00th=[13304], 00:22:55.815 | 70.00th=[13566], 80.00th=[13829], 90.00th=[14091], 95.00th=[14222], 00:22:55.815 | 99.00th=[17433], 99.50th=[17695], 99.90th=[22938], 99.95th=[22938], 00:22:55.815 | 99.99th=[22938] 00:22:55.815 bw ( KiB/s): min=26112, max=29952, per=33.28%, avg=28537.26, stdev=932.32, samples=19 00:22:55.815 iops : min= 204, max= 234, avg=222.95, stdev= 7.28, samples=19 00:22:55.815 lat (msec) : 10=0.13%, 20=99.73%, 50=0.13% 00:22:55.815 cpu : usr=90.65%, sys=8.79%, ctx=6, majf=0, minf=0 00:22:55.815 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:55.815 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:55.815 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:55.815 issued rwts: total=2235,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:55.815 latency : target=0, window=0, percentile=100.00%, depth=3 00:22:55.815 00:22:55.815 Run status group 0 (all jobs): 00:22:55.815 READ: bw=83.7MiB/s (87.8MB/s), 27.9MiB/s-27.9MiB/s (29.3MB/s-29.3MB/s), io=838MiB (879MB), run=10004-10008msec 00:22:55.815 09:50:41 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:22:55.815 09:50:41 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:22:55.815 09:50:41 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:22:55.815 09:50:41 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:22:55.815 09:50:41 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:22:55.815 09:50:41 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:22:55.815 09:50:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.815 09:50:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:22:55.815 09:50:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.815 09:50:41 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:22:55.815 09:50:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.815 09:50:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:22:55.815 ************************************ 00:22:55.815 END TEST fio_dif_digest 00:22:55.815 ************************************ 00:22:55.815 09:50:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.815 00:22:55.815 real 0m11.070s 00:22:55.815 user 0m28.045s 00:22:55.815 sys 0m2.783s 00:22:55.815 09:50:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:55.815 09:50:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:22:55.815 09:50:41 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:22:55.815 09:50:41 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:22:55.815 09:50:41 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:55.815 09:50:41 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:22:55.815 09:50:41 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:55.815 09:50:41 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:22:55.815 09:50:41 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:55.815 09:50:41 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:55.815 rmmod nvme_tcp 00:22:55.815 rmmod nvme_fabrics 00:22:55.815 rmmod nvme_keyring 00:22:55.815 09:50:41 nvmf_dif -- nvmf/common.sh@127 -- # 
modprobe -v -r nvme-fabrics 00:22:55.815 09:50:41 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:22:55.815 09:50:41 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:22:55.815 09:50:41 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 83109 ']' 00:22:55.815 09:50:41 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 83109 00:22:55.815 09:50:41 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 83109 ']' 00:22:55.815 09:50:41 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 83109 00:22:55.815 09:50:41 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:22:55.815 09:50:41 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:55.815 09:50:41 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83109 00:22:55.815 killing process with pid 83109 00:22:55.815 09:50:41 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:55.815 09:50:41 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:55.815 09:50:41 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83109' 00:22:55.815 09:50:41 nvmf_dif -- common/autotest_common.sh@973 -- # kill 83109 00:22:55.815 09:50:41 nvmf_dif -- common/autotest_common.sh@978 -- # wait 83109 00:22:55.815 09:50:41 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:22:55.815 09:50:41 nvmf_dif -- nvmf/common.sh@521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:22:55.815 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:55.815 Waiting for block devices as requested 00:22:55.815 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:22:55.815 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:22:55.815 09:50:42 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:55.815 09:50:42 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:55.815 09:50:42 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:22:55.815 09:50:42 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:22:55.815 09:50:42 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:55.815 09:50:42 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:22:55.815 09:50:42 nvmf_dif -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:55.815 09:50:42 nvmf_dif -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:22:55.815 09:50:42 nvmf_dif -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:22:55.815 09:50:42 nvmf_dif -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:22:55.815 09:50:42 nvmf_dif -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:22:55.815 09:50:42 nvmf_dif -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:22:55.816 09:50:42 nvmf_dif -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:22:55.816 09:50:42 nvmf_dif -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:22:55.816 09:50:42 nvmf_dif -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:22:55.816 09:50:42 nvmf_dif -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:22:55.816 09:50:42 nvmf_dif -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:22:55.816 09:50:42 nvmf_dif -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:22:55.816 09:50:42 nvmf_dif -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:22:55.816 09:50:42 nvmf_dif -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:55.816 09:50:42 nvmf_dif -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link delete nvmf_tgt_if2 00:22:55.816 09:50:42 nvmf_dif -- nvmf/common.sh@246 -- # remove_spdk_ns 00:22:55.816 09:50:42 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:55.816 09:50:42 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:22:55.816 09:50:42 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:55.816 09:50:42 nvmf_dif -- nvmf/common.sh@300 -- # return 0 00:22:55.816 00:22:55.816 real 1m0.571s 00:22:55.816 user 3m48.680s 00:22:55.816 sys 0m19.740s 00:22:55.816 09:50:42 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:55.816 ************************************ 00:22:55.816 END TEST nvmf_dif 00:22:55.816 ************************************ 00:22:55.816 09:50:42 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:22:55.816 09:50:42 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:22:55.816 09:50:42 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:55.816 09:50:42 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:55.816 09:50:42 -- common/autotest_common.sh@10 -- # set +x 00:22:55.816 ************************************ 00:22:55.816 START TEST nvmf_abort_qd_sizes 00:22:55.816 ************************************ 00:22:55.816 09:50:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:22:55.816 * Looking for test storage... 00:22:55.816 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:22:55.816 09:50:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:55.816 09:50:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lcov --version 00:22:55.816 09:50:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:55.816 09:50:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:55.816 09:50:42 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:55.816 09:50:42 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:55.816 09:50:42 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:55.816 09:50:42 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:22:55.816 09:50:42 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:22:55.816 09:50:42 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:22:55.816 09:50:42 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:22:55.816 09:50:42 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:22:55.816 09:50:42 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:22:55.816 09:50:42 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:22:55.816 09:50:42 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:55.816 09:50:42 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:22:55.816 09:50:42 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:22:55.816 09:50:42 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:55.816 09:50:42 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:55.816 09:50:42 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:22:55.816 09:50:42 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:22:55.816 09:50:42 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:55.816 09:50:42 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:22:55.816 09:50:42 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:22:55.816 09:50:42 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:22:55.816 09:50:42 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:22:55.816 09:50:42 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:55.816 09:50:42 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:22:55.816 09:50:42 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:22:55.816 09:50:42 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:55.816 09:50:42 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:55.816 09:50:42 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:22:55.816 09:50:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:55.816 09:50:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:55.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:55.816 --rc genhtml_branch_coverage=1 00:22:55.816 --rc genhtml_function_coverage=1 00:22:55.816 --rc genhtml_legend=1 00:22:55.816 --rc geninfo_all_blocks=1 00:22:55.816 --rc geninfo_unexecuted_blocks=1 00:22:55.816 00:22:55.816 ' 00:22:55.816 09:50:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:55.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:55.816 --rc genhtml_branch_coverage=1 00:22:55.816 --rc genhtml_function_coverage=1 00:22:55.816 --rc genhtml_legend=1 00:22:55.816 --rc geninfo_all_blocks=1 00:22:55.816 --rc geninfo_unexecuted_blocks=1 00:22:55.816 00:22:55.816 ' 00:22:55.816 09:50:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:55.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:55.816 --rc genhtml_branch_coverage=1 00:22:55.816 --rc genhtml_function_coverage=1 00:22:55.816 --rc genhtml_legend=1 00:22:55.816 --rc geninfo_all_blocks=1 00:22:55.816 --rc geninfo_unexecuted_blocks=1 00:22:55.816 00:22:55.816 ' 00:22:55.816 09:50:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:55.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:55.816 --rc genhtml_branch_coverage=1 00:22:55.816 --rc genhtml_function_coverage=1 00:22:55.816 --rc genhtml_legend=1 00:22:55.816 --rc geninfo_all_blocks=1 00:22:55.816 --rc geninfo_unexecuted_blocks=1 00:22:55.816 00:22:55.816 ' 00:22:55.816 09:50:42 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:55.816 09:50:42 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:22:55.816 09:50:42 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:55.816 09:50:42 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:55.816 09:50:42 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:55.816 09:50:42 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:55.816 09:50:42 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:22:55.816 09:50:42 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:55.816 09:50:42 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:55.816 09:50:42 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:55.816 09:50:42 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:55.816 09:50:42 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:55.816 09:50:43 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c 00:22:55.816 09:50:43 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=9203ba0c-8506-4f0b-a886-a7f874c4694c 00:22:55.816 09:50:43 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:55.816 09:50:43 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:55.816 09:50:43 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:55.816 09:50:43 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:55.816 09:50:43 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:55.816 09:50:43 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:22:55.816 09:50:43 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:55.816 09:50:43 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:55.816 09:50:43 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:55.817 09:50:43 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:55.817 09:50:43 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:55.817 09:50:43 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:55.817 09:50:43 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:22:55.817 09:50:43 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:55.817 09:50:43 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:22:55.817 09:50:43 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:55.817 09:50:43 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:55.817 09:50:43 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:55.817 09:50:43 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:55.817 09:50:43 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:55.817 09:50:43 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:55.817 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:55.817 09:50:43 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:55.817 09:50:43 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:55.817 09:50:43 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:55.817 09:50:43 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:22:55.817 09:50:43 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:55.817 09:50:43 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:55.817 09:50:43 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:55.817 09:50:43 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:55.817 09:50:43 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:55.817 09:50:43 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:55.817 09:50:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:22:55.817 09:50:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:55.817 09:50:43 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:22:55.817 09:50:43 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:22:55.817 09:50:43 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:22:55.817 09:50:43 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:22:55.817 09:50:43 nvmf_abort_qd_sizes -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:22:55.817 09:50:43 nvmf_abort_qd_sizes -- nvmf/common.sh@460 -- # nvmf_veth_init 00:22:55.817 09:50:43 nvmf_abort_qd_sizes -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:55.817 09:50:43 nvmf_abort_qd_sizes -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:22:55.817 09:50:43 nvmf_abort_qd_sizes -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:22:55.817 09:50:43 nvmf_abort_qd_sizes -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:22:55.817 09:50:43 nvmf_abort_qd_sizes -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:55.817 09:50:43 nvmf_abort_qd_sizes -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:22:55.817 09:50:43 nvmf_abort_qd_sizes -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:55.817 09:50:43 nvmf_abort_qd_sizes -- nvmf/common.sh@152 -- # 
NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:22:55.817 09:50:43 nvmf_abort_qd_sizes -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:55.817 09:50:43 nvmf_abort_qd_sizes -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:22:55.817 09:50:43 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:55.817 09:50:43 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:55.817 09:50:43 nvmf_abort_qd_sizes -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:55.817 09:50:43 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:55.817 09:50:43 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:55.817 09:50:43 nvmf_abort_qd_sizes -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:55.817 09:50:43 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:22:55.817 Cannot find device "nvmf_init_br" 00:22:55.817 09:50:43 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # true 00:22:55.817 09:50:43 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:22:55.817 Cannot find device "nvmf_init_br2" 00:22:55.817 09:50:43 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # true 00:22:55.817 09:50:43 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:22:55.817 Cannot find device "nvmf_tgt_br" 00:22:55.817 09:50:43 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # true 00:22:55.817 09:50:43 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:22:55.817 Cannot find device "nvmf_tgt_br2" 00:22:55.817 09:50:43 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # true 00:22:55.817 09:50:43 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:22:55.817 Cannot find device "nvmf_init_br" 00:22:55.817 09:50:43 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # true 00:22:55.817 09:50:43 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:22:55.817 Cannot find device "nvmf_init_br2" 00:22:55.817 09:50:43 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # true 00:22:55.817 09:50:43 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:22:55.817 Cannot find device "nvmf_tgt_br" 00:22:55.817 09:50:43 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # true 00:22:55.817 09:50:43 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:22:55.817 Cannot find device "nvmf_tgt_br2" 00:22:55.817 09:50:43 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # true 00:22:55.817 09:50:43 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:22:55.817 Cannot find device "nvmf_br" 00:22:55.817 09:50:43 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # true 00:22:55.817 09:50:43 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:22:55.817 Cannot find device "nvmf_init_if" 00:22:55.817 09:50:43 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # true 00:22:55.817 09:50:43 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:22:55.817 Cannot find device "nvmf_init_if2" 00:22:55.817 09:50:43 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # true 00:22:55.817 09:50:43 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:55.817 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 
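The trace above shows nvmf_veth_init first tearing down anything left over from a previous run; the "Cannot find device" and "Cannot open network namespace" messages are the expected result of deleting interfaces that do not exist yet on a clean host. The entries that follow rebuild the test topology from scratch. A condensed sketch of that topology, using the interface names and 10.0.0.0/24 addresses seen in this log (the real logic lives in nvmf/common.sh; this is only the essential ip(8)/iptables calls):

    # initiator side stays in the root namespace, target side moves into nvmf_tgt_ns_spdk
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    # initiator addresses on the host, target addresses inside the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

    # bring everything up and join the *_br peers to one bridge
    for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" up
    done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" master nvmf_br
    done

    # let NVMe/TCP traffic on port 4420 in through the initiator interfaces
    iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
    iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

After this, the four ping checks below confirm that the initiator addresses (10.0.0.1/10.0.0.2) and the namespaced target addresses (10.0.0.3/10.0.0.4) can reach each other across the bridge.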
00:22:55.817 09:50:43 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # true 00:22:55.817 09:50:43 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:55.817 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:55.817 09:50:43 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # true 00:22:55.817 09:50:43 nvmf_abort_qd_sizes -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:22:55.817 09:50:43 nvmf_abort_qd_sizes -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:55.817 09:50:43 nvmf_abort_qd_sizes -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:22:55.817 09:50:43 nvmf_abort_qd_sizes -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:55.817 09:50:43 nvmf_abort_qd_sizes -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:55.817 09:50:43 nvmf_abort_qd_sizes -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:55.817 09:50:43 nvmf_abort_qd_sizes -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:55.817 09:50:43 nvmf_abort_qd_sizes -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:55.817 09:50:43 nvmf_abort_qd_sizes -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:22:55.817 09:50:43 nvmf_abort_qd_sizes -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:22:55.817 09:50:43 nvmf_abort_qd_sizes -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:22:55.818 09:50:43 nvmf_abort_qd_sizes -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:22:55.818 09:50:43 nvmf_abort_qd_sizes -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:22:55.818 09:50:43 nvmf_abort_qd_sizes -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:22:55.818 09:50:43 nvmf_abort_qd_sizes -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:22:55.818 09:50:43 nvmf_abort_qd_sizes -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:22:55.818 09:50:43 nvmf_abort_qd_sizes -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:22:55.818 09:50:43 nvmf_abort_qd_sizes -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:55.818 09:50:43 nvmf_abort_qd_sizes -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:55.818 09:50:43 nvmf_abort_qd_sizes -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:55.818 09:50:43 nvmf_abort_qd_sizes -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:22:55.818 09:50:43 nvmf_abort_qd_sizes -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:22:55.818 09:50:43 nvmf_abort_qd_sizes -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:22:55.818 09:50:43 nvmf_abort_qd_sizes -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:22:55.818 09:50:43 nvmf_abort_qd_sizes -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:55.818 09:50:43 nvmf_abort_qd_sizes -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:55.818 09:50:43 nvmf_abort_qd_sizes -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:55.818 09:50:43 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j 
ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:22:55.818 09:50:43 nvmf_abort_qd_sizes -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:22:55.818 09:50:43 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:22:55.818 09:50:43 nvmf_abort_qd_sizes -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:55.818 09:50:43 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:22:55.818 09:50:43 nvmf_abort_qd_sizes -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:22:55.818 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:55.818 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.048 ms 00:22:55.818 00:22:55.818 --- 10.0.0.3 ping statistics --- 00:22:55.818 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:55.818 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:22:55.818 09:50:43 nvmf_abort_qd_sizes -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:22:55.818 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:22:55.818 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.048 ms 00:22:55.818 00:22:55.818 --- 10.0.0.4 ping statistics --- 00:22:55.818 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:55.818 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:22:55.818 09:50:43 nvmf_abort_qd_sizes -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:55.818 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:55.818 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:22:55.818 00:22:55.818 --- 10.0.0.1 ping statistics --- 00:22:55.818 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:55.818 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:22:55.818 09:50:43 nvmf_abort_qd_sizes -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:22:55.818 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:55.818 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 00:22:55.818 00:22:55.818 --- 10.0.0.2 ping statistics --- 00:22:55.818 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:55.818 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:22:55.818 09:50:43 nvmf_abort_qd_sizes -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:55.818 09:50:43 nvmf_abort_qd_sizes -- nvmf/common.sh@461 -- # return 0 00:22:55.818 09:50:43 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:22:55.818 09:50:43 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:22:56.755 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:56.755 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:22:56.755 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:22:56.755 09:50:44 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:56.755 09:50:44 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:56.755 09:50:44 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:56.755 09:50:44 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:56.755 09:50:44 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:56.755 09:50:44 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:56.755 09:50:44 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:22:56.755 09:50:44 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:56.755 09:50:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:56.755 09:50:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:22:56.755 09:50:44 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=84526 00:22:56.755 09:50:44 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 84526 00:22:56.755 09:50:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 84526 ']' 00:22:56.755 09:50:44 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:22:56.755 09:50:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:56.755 09:50:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:56.755 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:56.755 09:50:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:56.755 09:50:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:56.755 09:50:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:22:56.755 [2024-11-19 09:50:44.357820] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 
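With the veth topology verified, nvmfappstart launches the SPDK NVMe-oF target inside the namespace (pid 84526 in this run) and waitforlisten blocks until the application answers on its UNIX-domain RPC socket. A minimal sketch of that launch-and-wait step, assuming the repository path used throughout this log and a simple poll of /var/tmp/spdk.sock in place of the fuller checks done by autotest_common.sh:

    SPDK=/home/vagrant/spdk_repo/spdk
    ip netns exec nvmf_tgt_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xf &
    nvmfpid=$!

    # crude waitforlisten: retry until the RPC server responds on the default socket
    for _ in $(seq 1 100); do
        if "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods > /dev/null 2>&1; then
            break
        fi
        sleep 0.1
    done

The -m 0xf mask pins the target to cores 0-3, which is why four reactors are reported started in the startup notices that follow.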
00:22:56.755 [2024-11-19 09:50:44.357989] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:57.014 [2024-11-19 09:50:44.518606] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:57.014 [2024-11-19 09:50:44.588198] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:57.014 [2024-11-19 09:50:44.588299] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:57.014 [2024-11-19 09:50:44.588325] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:57.014 [2024-11-19 09:50:44.588336] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:57.014 [2024-11-19 09:50:44.588346] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:57.014 [2024-11-19 09:50:44.589629] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:57.014 [2024-11-19 09:50:44.589776] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:57.014 [2024-11-19 09:50:44.589911] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:57.014 [2024-11-19 09:50:44.589912] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:57.273 [2024-11-19 09:50:44.651434] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:57.841 09:50:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:57.841 09:50:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:22:57.841 09:50:45 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:57.841 09:50:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:57.841 09:50:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:22:58.100 09:50:45 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:58.100 09:50:45 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:22:58.100 09:50:45 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:22:58.100 09:50:45 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:22:58.100 09:50:45 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:22:58.100 09:50:45 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:22:58.100 09:50:45 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n '' ]] 00:22:58.100 09:50:45 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:22:58.100 09:50:45 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:22:58.100 09:50:45 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # local bdf= 00:22:58.100 09:50:45 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:22:58.100 09:50:45 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # local class 00:22:58.100 09:50:45 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # local subclass 00:22:58.100 09:50:45 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # local progif 00:22:58.100 09:50:45 
nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # printf %02x 1 00:22:58.100 09:50:45 nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # class=01 00:22:58.100 09:50:45 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # printf %02x 8 00:22:58.100 09:50:45 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # subclass=08 00:22:58.100 09:50:45 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # printf %02x 2 00:22:58.100 09:50:45 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # progif=02 00:22:58.100 09:50:45 nvmf_abort_qd_sizes -- scripts/common.sh@240 -- # hash lspci 00:22:58.100 09:50:45 nvmf_abort_qd_sizes -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:22:58.100 09:50:45 nvmf_abort_qd_sizes -- scripts/common.sh@242 -- # lspci -mm -n -D 00:22:58.100 09:50:45 nvmf_abort_qd_sizes -- scripts/common.sh@243 -- # grep -i -- -p02 00:22:58.100 09:50:45 nvmf_abort_qd_sizes -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:22:58.100 09:50:45 nvmf_abort_qd_sizes -- scripts/common.sh@245 -- # tr -d '"' 00:22:58.100 09:50:45 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:22:58.100 09:50:45 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:22:58.100 09:50:45 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 00:22:58.100 09:50:45 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:22:58.100 09:50:45 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 00:22:58.100 09:50:45 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 00:22:58.100 09:50:45 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:22:58.100 09:50:45 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:22:58.100 09:50:45 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:22:58.100 09:50:45 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 00:22:58.100 09:50:45 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:22:58.100 09:50:45 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 00:22:58.100 09:50:45 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 00:22:58.100 09:50:45 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:22:58.100 09:50:45 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:22:58.100 09:50:45 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:22:58.100 09:50:45 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:22:58.100 09:50:45 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:22:58.100 09:50:45 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:22:58.100 09:50:45 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:22:58.100 09:50:45 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:22:58.100 09:50:45 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:22:58.100 09:50:45 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:22:58.100 09:50:45 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:22:58.100 09:50:45 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 2 )) 00:22:58.100 09:50:45 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:22:58.100 09:50:45 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 
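The iter_pci_class_code walk above is how the test finds NVMe controllers it can use: PCI class 01 (mass storage), subclass 08 (non-volatile memory), programming interface 02 (NVM Express) are matched against lspci output, yielding 0000:00:10.0 and 0000:00:11.0 here. The same pipeline, pulled out of the trace and lightly commented (the helper in scripts/common.sh additionally applies an allow/block list, which is empty in this run):

    class=01 subclass=08 progif=02   # 0x010802 == NVM Express controller
    # lspci -mm -n -D prints one quoted record per device; $2 is the quoted class code
    lspci -mm -n -D | grep -i -- "-p${progif}" \
        | awk -v cc="\"${class}${subclass}\"" -F ' ' '{if (cc ~ $2) print $1}' \
        | tr -d '"'

The per-BDF checks that follow skip any controller still claimed by the kernel nvme driver; since setup.sh rebound both devices to uio_pci_generic earlier in this log, both BDFs survive, and the first one (0000:00:10.0) is handed to the spdk_target_abort test below.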
00:22:58.100 09:50:45 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 00:22:58.100 09:50:45 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:22:58.100 09:50:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:58.100 09:50:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:58.100 09:50:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:22:58.100 ************************************ 00:22:58.100 START TEST spdk_target_abort 00:22:58.100 ************************************ 00:22:58.101 09:50:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:22:58.101 09:50:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:22:58.101 09:50:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 00:22:58.101 09:50:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:58.101 09:50:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:22:58.101 spdk_targetn1 00:22:58.101 09:50:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:58.101 09:50:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:58.101 09:50:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:58.101 09:50:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:22:58.101 [2024-11-19 09:50:45.583366] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:58.101 09:50:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:58.101 09:50:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:22:58.101 09:50:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:58.101 09:50:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:22:58.101 09:50:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:58.101 09:50:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:22:58.101 09:50:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:58.101 09:50:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:22:58.101 09:50:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:58.101 09:50:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.3 -s 4420 00:22:58.101 09:50:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:58.101 09:50:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:22:58.101 [2024-11-19 09:50:45.627648] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:22:58.101 09:50:45 
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:58.101 09:50:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.3 4420 nqn.2016-06.io.spdk:testnqn 00:22:58.101 09:50:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:22:58.101 09:50:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:22:58.101 09:50:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.3 00:22:58.101 09:50:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:22:58.101 09:50:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:22:58.101 09:50:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:22:58.101 09:50:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:22:58.101 09:50:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:22:58.101 09:50:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:22:58.101 09:50:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:22:58.101 09:50:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:22:58.101 09:50:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:22:58.101 09:50:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:22:58.101 09:50:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3' 00:22:58.101 09:50:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:22:58.101 09:50:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:22:58.101 09:50:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:22:58.101 09:50:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:22:58.101 09:50:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:22:58.101 09:50:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:01.390 Initializing NVMe Controllers 00:23:01.390 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:23:01.390 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:23:01.390 Initialization complete. Launching workers. 
00:23:01.390 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 10455, failed: 0 00:23:01.390 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1050, failed to submit 9405 00:23:01.390 success 804, unsuccessful 246, failed 0 00:23:01.390 09:50:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:23:01.390 09:50:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:04.676 Initializing NVMe Controllers 00:23:04.676 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:23:04.676 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:23:04.676 Initialization complete. Launching workers. 00:23:04.676 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8967, failed: 0 00:23:04.676 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1159, failed to submit 7808 00:23:04.676 success 405, unsuccessful 754, failed 0 00:23:04.677 09:50:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:23:04.677 09:50:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:07.961 Initializing NVMe Controllers 00:23:07.961 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:23:07.961 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:23:07.961 Initialization complete. Launching workers. 
00:23:07.961 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31835, failed: 0 00:23:07.961 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2336, failed to submit 29499 00:23:07.961 success 493, unsuccessful 1843, failed 0 00:23:07.962 09:50:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:23:07.962 09:50:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:07.962 09:50:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:23:07.962 09:50:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:07.962 09:50:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:23:07.962 09:50:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:07.962 09:50:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:23:08.529 09:50:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:08.529 09:50:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 84526 00:23:08.529 09:50:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 84526 ']' 00:23:08.529 09:50:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 84526 00:23:08.529 09:50:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:23:08.529 09:50:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:08.529 09:50:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84526 00:23:08.529 09:50:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:08.529 killing process with pid 84526 00:23:08.529 09:50:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:08.529 09:50:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84526' 00:23:08.529 09:50:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 84526 00:23:08.529 09:50:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 84526 00:23:08.788 00:23:08.788 real 0m10.777s 00:23:08.788 user 0m44.196s 00:23:08.788 sys 0m2.216s 00:23:08.788 09:50:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:08.788 09:50:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:23:08.788 ************************************ 00:23:08.788 END TEST spdk_target_abort 00:23:08.788 ************************************ 00:23:08.788 09:50:56 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:23:08.788 09:50:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:08.788 09:50:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:08.788 09:50:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:23:08.788 ************************************ 00:23:08.788 START TEST kernel_target_abort 00:23:08.788 
************************************ 00:23:08.788 09:50:56 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:23:08.788 09:50:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:23:08.788 09:50:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:23:08.788 09:50:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:08.788 09:50:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:08.788 09:50:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:08.788 09:50:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:08.788 09:50:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:08.788 09:50:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:08.788 09:50:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:08.788 09:50:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:08.788 09:50:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:08.788 09:50:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:23:08.788 09:50:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:23:08.788 09:50:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:23:08.788 09:50:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:08.788 09:50:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:08.788 09:50:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:23:08.788 09:50:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:23:08.788 09:50:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:23:08.788 09:50:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:23:08.788 09:50:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:23:08.788 09:50:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:23:09.355 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:09.355 Waiting for block devices as requested 00:23:09.355 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:23:09.355 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:23:09.355 09:50:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:23:09.355 09:50:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:23:09.355 09:50:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:23:09.355 09:50:56 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:23:09.355 09:50:56 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:23:09.355 09:50:56 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:23:09.355 09:50:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:23:09.355 09:50:56 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:23:09.355 09:50:56 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:23:09.613 No valid GPT data, bailing 00:23:09.613 09:50:57 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:23:09.613 09:50:57 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:23:09.613 09:50:57 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:23:09.613 09:50:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:23:09.613 09:50:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:23:09.613 09:50:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:23:09.613 09:50:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:23:09.613 09:50:57 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:23:09.613 09:50:57 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:23:09.613 09:50:57 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:23:09.613 09:50:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:23:09.613 09:50:57 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:23:09.613 09:50:57 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:23:09.613 No valid GPT data, bailing 00:23:09.613 09:50:57 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 
00:23:09.613 09:50:57 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:23:09.613 09:50:57 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:23:09.613 09:50:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:23:09.613 09:50:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:23:09.613 09:50:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:23:09.613 09:50:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:23:09.613 09:50:57 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:23:09.613 09:50:57 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:23:09.613 09:50:57 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:23:09.613 09:50:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:23:09.613 09:50:57 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:23:09.613 09:50:57 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:23:09.613 No valid GPT data, bailing 00:23:09.613 09:50:57 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:23:09.613 09:50:57 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:23:09.613 09:50:57 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:23:09.613 09:50:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:23:09.613 09:50:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:23:09.613 09:50:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:23:09.613 09:50:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:23:09.613 09:50:57 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:23:09.613 09:50:57 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:23:09.613 09:50:57 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:23:09.613 09:50:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:23:09.613 09:50:57 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:23:09.613 09:50:57 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:23:09.613 No valid GPT data, bailing 00:23:09.871 09:50:57 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:23:09.871 09:50:57 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:23:09.871 09:50:57 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:23:09.871 09:50:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:23:09.871 09:50:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ 
-b /dev/nvme1n1 ]] 00:23:09.871 09:50:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:09.871 09:50:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:09.871 09:50:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:23:09.871 09:50:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:23:09.871 09:50:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:23:09.871 09:50:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:23:09.871 09:50:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:23:09.871 09:50:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:23:09.871 09:50:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:23:09.871 09:50:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:23:09.871 09:50:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:23:09.871 09:50:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:23:09.871 09:50:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c --hostid=9203ba0c-8506-4f0b-a886-a7f874c4694c -a 10.0.0.1 -t tcp -s 4420 00:23:09.871 00:23:09.871 Discovery Log Number of Records 2, Generation counter 2 00:23:09.871 =====Discovery Log Entry 0====== 00:23:09.871 trtype: tcp 00:23:09.871 adrfam: ipv4 00:23:09.871 subtype: current discovery subsystem 00:23:09.871 treq: not specified, sq flow control disable supported 00:23:09.871 portid: 1 00:23:09.871 trsvcid: 4420 00:23:09.871 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:23:09.871 traddr: 10.0.0.1 00:23:09.871 eflags: none 00:23:09.871 sectype: none 00:23:09.871 =====Discovery Log Entry 1====== 00:23:09.871 trtype: tcp 00:23:09.871 adrfam: ipv4 00:23:09.871 subtype: nvme subsystem 00:23:09.871 treq: not specified, sq flow control disable supported 00:23:09.871 portid: 1 00:23:09.871 trsvcid: 4420 00:23:09.871 subnqn: nqn.2016-06.io.spdk:testnqn 00:23:09.871 traddr: 10.0.0.1 00:23:09.871 eflags: none 00:23:09.871 sectype: none 00:23:09.871 09:50:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:23:09.871 09:50:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:23:09.871 09:50:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:23:09.871 09:50:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:23:09.871 09:50:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:23:09.871 09:50:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:23:09.871 09:50:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:23:09.871 09:50:57 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:23:09.871 09:50:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:23:09.871 09:50:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:23:09.871 09:50:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:23:09.871 09:50:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:23:09.871 09:50:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:23:09.871 09:50:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:23:09.871 09:50:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:23:09.871 09:50:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:23:09.871 09:50:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:23:09.871 09:50:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:23:09.871 09:50:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:09.871 09:50:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:23:09.871 09:50:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:13.169 Initializing NVMe Controllers 00:23:13.169 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:23:13.169 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:23:13.169 Initialization complete. Launching workers. 00:23:13.169 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 32733, failed: 0 00:23:13.169 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 32733, failed to submit 0 00:23:13.169 success 0, unsuccessful 32733, failed 0 00:23:13.170 09:51:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:23:13.170 09:51:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:16.457 Initializing NVMe Controllers 00:23:16.457 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:23:16.457 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:23:16.457 Initialization complete. Launching workers. 
00:23:16.457 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 67126, failed: 0 00:23:16.457 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 29093, failed to submit 38033 00:23:16.457 success 0, unsuccessful 29093, failed 0 00:23:16.457 09:51:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:23:16.457 09:51:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:19.752 Initializing NVMe Controllers 00:23:19.752 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:23:19.752 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:23:19.752 Initialization complete. Launching workers. 00:23:19.752 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 81691, failed: 0 00:23:19.752 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 20424, failed to submit 61267 00:23:19.752 success 0, unsuccessful 20424, failed 0 00:23:19.752 09:51:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:23:19.752 09:51:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:23:19.752 09:51:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:23:19.752 09:51:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:19.752 09:51:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:19.752 09:51:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:23:19.752 09:51:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:19.752 09:51:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:23:19.752 09:51:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:23:19.752 09:51:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:23:20.009 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:21.914 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:23:21.914 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:23:21.914 00:23:21.914 real 0m13.107s 00:23:21.914 user 0m6.135s 00:23:21.914 sys 0m4.418s 00:23:21.914 09:51:09 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:21.914 09:51:09 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:23:21.914 ************************************ 00:23:21.914 END TEST kernel_target_abort 00:23:21.914 ************************************ 00:23:21.914 09:51:09 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:23:21.914 09:51:09 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:23:21.914 
09:51:09 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:21.914 09:51:09 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:23:21.914 09:51:09 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:21.914 09:51:09 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:23:21.914 09:51:09 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:21.914 09:51:09 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:21.914 rmmod nvme_tcp 00:23:22.174 rmmod nvme_fabrics 00:23:22.174 rmmod nvme_keyring 00:23:22.174 09:51:09 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:22.174 09:51:09 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:23:22.174 09:51:09 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:23:22.174 09:51:09 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 84526 ']' 00:23:22.174 09:51:09 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 84526 00:23:22.174 09:51:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 84526 ']' 00:23:22.174 09:51:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 84526 00:23:22.174 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (84526) - No such process 00:23:22.174 09:51:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 84526 is not found' 00:23:22.174 Process with pid 84526 is not found 00:23:22.174 09:51:09 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:23:22.174 09:51:09 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:23:22.432 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:22.432 Waiting for block devices as requested 00:23:22.432 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:23:22.689 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:23:22.689 09:51:10 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:22.689 09:51:10 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:22.689 09:51:10 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:23:22.689 09:51:10 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:23:22.689 09:51:10 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:23:22.689 09:51:10 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:22.689 09:51:10 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:22.689 09:51:10 nvmf_abort_qd_sizes -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:23:22.689 09:51:10 nvmf_abort_qd_sizes -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:23:22.689 09:51:10 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:23:22.689 09:51:10 nvmf_abort_qd_sizes -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:23:22.689 09:51:10 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:23:22.689 09:51:10 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:23:22.689 09:51:10 nvmf_abort_qd_sizes -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:23:22.689 09:51:10 nvmf_abort_qd_sizes -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:23:22.689 09:51:10 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:23:22.689 09:51:10 nvmf_abort_qd_sizes 
-- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:23:22.689 09:51:10 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:23:22.689 09:51:10 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:23:22.949 09:51:10 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:22.949 09:51:10 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:22.949 09:51:10 nvmf_abort_qd_sizes -- nvmf/common.sh@246 -- # remove_spdk_ns 00:23:22.949 09:51:10 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:22.949 09:51:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:23:22.949 09:51:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:22.949 09:51:10 nvmf_abort_qd_sizes -- nvmf/common.sh@300 -- # return 0 00:23:22.949 00:23:22.949 real 0m27.608s 00:23:22.949 user 0m51.756s 00:23:22.949 sys 0m8.113s 00:23:22.949 09:51:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:22.949 ************************************ 00:23:22.949 END TEST nvmf_abort_qd_sizes 00:23:22.949 ************************************ 00:23:22.949 09:51:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:23:22.949 09:51:10 -- spdk/autotest.sh@292 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:23:22.949 09:51:10 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:22.949 09:51:10 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:22.949 09:51:10 -- common/autotest_common.sh@10 -- # set +x 00:23:22.949 ************************************ 00:23:22.949 START TEST keyring_file 00:23:22.949 ************************************ 00:23:22.949 09:51:10 keyring_file -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:23:22.949 * Looking for test storage... 
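As the log switches from the nvmf_abort_qd_sizes teardown to the keyring_file suite, the traces above finish dismantling both the kernel nvmet target and the veth/netns topology used by the TCP tests. A condensed sketch of that teardown, reconstructed from the commands visible in the log (the helper names and the final netns removal are assumptions, since remove_spdk_ns itself is not expanded by xtrace):

    # 1. Kernel nvmet target removal via configfs, children before parents.
    subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn
    rmdir "$subsys/namespaces/1"
    rmdir /sys/kernel/config/nvmet/ports/1
    rmdir "$subsys"
    modprobe -r nvmet_tcp nvmet

    # 2. Host-side cleanup: unload initiator modules, drop the SPDK iptables rules.
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    iptables-save | grep -v SPDK_NVMF | iptables-restore

    # 3. Veth/bridge/namespace teardown for the target-side interfaces.
    for link in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$link" nomaster
        ip link set "$link" down
    done
    ip link delete nvmf_br type bridge
    ip link delete nvmf_init_if
    ip link delete nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
    ip netns delete nvmf_tgt_ns_spdk   # assumed effect of remove_spdk_ns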
00:23:22.949 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:23:22.949 09:51:10 keyring_file -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:22.949 09:51:10 keyring_file -- common/autotest_common.sh@1693 -- # lcov --version 00:23:22.949 09:51:10 keyring_file -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:23.209 09:51:10 keyring_file -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:23.209 09:51:10 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:23.209 09:51:10 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:23.209 09:51:10 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:23.209 09:51:10 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:23:23.209 09:51:10 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:23:23.209 09:51:10 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:23:23.209 09:51:10 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:23:23.209 09:51:10 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:23:23.209 09:51:10 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:23:23.209 09:51:10 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:23:23.209 09:51:10 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:23.209 09:51:10 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:23:23.209 09:51:10 keyring_file -- scripts/common.sh@345 -- # : 1 00:23:23.209 09:51:10 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:23.209 09:51:10 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:23.209 09:51:10 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:23:23.209 09:51:10 keyring_file -- scripts/common.sh@353 -- # local d=1 00:23:23.209 09:51:10 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:23.209 09:51:10 keyring_file -- scripts/common.sh@355 -- # echo 1 00:23:23.209 09:51:10 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:23:23.209 09:51:10 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:23:23.209 09:51:10 keyring_file -- scripts/common.sh@353 -- # local d=2 00:23:23.209 09:51:10 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:23.209 09:51:10 keyring_file -- scripts/common.sh@355 -- # echo 2 00:23:23.209 09:51:10 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:23:23.209 09:51:10 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:23.209 09:51:10 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:23.209 09:51:10 keyring_file -- scripts/common.sh@368 -- # return 0 00:23:23.209 09:51:10 keyring_file -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:23.209 09:51:10 keyring_file -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:23.209 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:23.209 --rc genhtml_branch_coverage=1 00:23:23.209 --rc genhtml_function_coverage=1 00:23:23.209 --rc genhtml_legend=1 00:23:23.209 --rc geninfo_all_blocks=1 00:23:23.209 --rc geninfo_unexecuted_blocks=1 00:23:23.209 00:23:23.209 ' 00:23:23.209 09:51:10 keyring_file -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:23.209 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:23.209 --rc genhtml_branch_coverage=1 00:23:23.209 --rc genhtml_function_coverage=1 00:23:23.209 --rc genhtml_legend=1 00:23:23.209 --rc geninfo_all_blocks=1 00:23:23.209 --rc 
geninfo_unexecuted_blocks=1 00:23:23.209 00:23:23.209 ' 00:23:23.209 09:51:10 keyring_file -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:23.209 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:23.209 --rc genhtml_branch_coverage=1 00:23:23.209 --rc genhtml_function_coverage=1 00:23:23.209 --rc genhtml_legend=1 00:23:23.209 --rc geninfo_all_blocks=1 00:23:23.209 --rc geninfo_unexecuted_blocks=1 00:23:23.209 00:23:23.209 ' 00:23:23.209 09:51:10 keyring_file -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:23.209 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:23.209 --rc genhtml_branch_coverage=1 00:23:23.209 --rc genhtml_function_coverage=1 00:23:23.209 --rc genhtml_legend=1 00:23:23.209 --rc geninfo_all_blocks=1 00:23:23.209 --rc geninfo_unexecuted_blocks=1 00:23:23.209 00:23:23.209 ' 00:23:23.209 09:51:10 keyring_file -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:23:23.209 09:51:10 keyring_file -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:23.209 09:51:10 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:23:23.209 09:51:10 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:23.209 09:51:10 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:23.209 09:51:10 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:23.209 09:51:10 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:23.209 09:51:10 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:23.209 09:51:10 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:23.209 09:51:10 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:23.209 09:51:10 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:23.209 09:51:10 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:23.209 09:51:10 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:23.209 09:51:10 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c 00:23:23.209 09:51:10 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=9203ba0c-8506-4f0b-a886-a7f874c4694c 00:23:23.209 09:51:10 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:23.209 09:51:10 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:23.209 09:51:10 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:23.209 09:51:10 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:23.209 09:51:10 keyring_file -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:23.209 09:51:10 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:23:23.209 09:51:10 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:23.209 09:51:10 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:23.209 09:51:10 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:23.209 09:51:10 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:23.209 09:51:10 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:23.209 09:51:10 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:23.209 09:51:10 keyring_file -- paths/export.sh@5 -- # export PATH 00:23:23.209 09:51:10 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:23.209 09:51:10 keyring_file -- nvmf/common.sh@51 -- # : 0 00:23:23.209 09:51:10 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:23.209 09:51:10 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:23.209 09:51:10 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:23.209 09:51:10 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:23.209 09:51:10 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:23.209 09:51:10 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:23.209 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:23.209 09:51:10 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:23.209 09:51:10 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:23.209 09:51:10 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:23.209 09:51:10 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:23:23.209 09:51:10 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:23:23.209 09:51:10 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:23:23.209 09:51:10 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:23:23.209 09:51:10 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:23:23.209 09:51:10 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:23:23.209 09:51:10 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:23:23.209 09:51:10 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:23:23.209 09:51:10 
keyring_file -- keyring/common.sh@17 -- # name=key0 00:23:23.209 09:51:10 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:23:23.209 09:51:10 keyring_file -- keyring/common.sh@17 -- # digest=0 00:23:23.209 09:51:10 keyring_file -- keyring/common.sh@18 -- # mktemp 00:23:23.209 09:51:10 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.sd8DDuHaSG 00:23:23.209 09:51:10 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:23:23.209 09:51:10 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:23:23.209 09:51:10 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:23:23.210 09:51:10 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:23:23.210 09:51:10 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:23:23.210 09:51:10 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:23:23.210 09:51:10 keyring_file -- nvmf/common.sh@733 -- # python - 00:23:23.210 09:51:10 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.sd8DDuHaSG 00:23:23.210 09:51:10 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.sd8DDuHaSG 00:23:23.210 09:51:10 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.sd8DDuHaSG 00:23:23.210 09:51:10 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:23:23.210 09:51:10 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:23:23.210 09:51:10 keyring_file -- keyring/common.sh@17 -- # name=key1 00:23:23.210 09:51:10 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:23:23.210 09:51:10 keyring_file -- keyring/common.sh@17 -- # digest=0 00:23:23.210 09:51:10 keyring_file -- keyring/common.sh@18 -- # mktemp 00:23:23.210 09:51:10 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.impRhT2pM5 00:23:23.210 09:51:10 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:23:23.210 09:51:10 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:23:23.210 09:51:10 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:23:23.210 09:51:10 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:23:23.210 09:51:10 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:23:23.210 09:51:10 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:23:23.210 09:51:10 keyring_file -- nvmf/common.sh@733 -- # python - 00:23:23.210 09:51:10 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.impRhT2pM5 00:23:23.210 09:51:10 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.impRhT2pM5 00:23:23.210 09:51:10 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.impRhT2pM5 00:23:23.210 09:51:10 keyring_file -- keyring/file.sh@30 -- # tgtpid=85440 00:23:23.210 09:51:10 keyring_file -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:23.210 09:51:10 keyring_file -- keyring/file.sh@32 -- # waitforlisten 85440 00:23:23.210 09:51:10 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 85440 ']' 00:23:23.210 09:51:10 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:23.210 09:51:10 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:23.210 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
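The prep_key traces above (name=key0, a 16-byte hex key, digest=0, mktemp, an inline "python -" step, chmod 0600) build an NVMe TLS PSK file for the keyring tests. A minimal sketch of what that inline python step presumably computes, assuming the standard NVMe TLS PSK interchange format (prefix, two-hex-digit hash identifier, base64 of the key bytes followed by their little-endian CRC-32, trailing colon); the real one-liner lives in test/nvmf/common.sh and is not shown in the trace:

    key_hex=00112233445566778899aabbccddeeff
    digest=0                  # 0 = no hash, 1 = SHA-256, 2 = SHA-384 (assumed mapping)
    psk_path=$(mktemp)
    python3 - "$key_hex" "$digest" > "$psk_path" <<'PY'
    import base64, struct, sys, zlib
    key = bytes.fromhex(sys.argv[1])
    digest = int(sys.argv[2])
    crc = struct.pack('<I', zlib.crc32(key))          # CRC-32 of the raw key, little-endian
    print(f"NVMeTLSkey-1:{digest:02x}:{base64.b64encode(key + crc).decode()}:")
    PY
    chmod 0600 "$psk_path"

The chmod 0600 matters: further down in this log the keyring backend refuses to load a copy of the same file once it has been relaxed to 0660 ("Invalid permissions for key file ... 0100660"), so anything more permissive than owner read/write is rejected.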
00:23:23.210 09:51:10 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:23.210 09:51:10 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:23.210 09:51:10 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:23:23.468 [2024-11-19 09:51:10.858332] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:23:23.468 [2024-11-19 09:51:10.858450] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85440 ] 00:23:23.468 [2024-11-19 09:51:11.008855] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:23.468 [2024-11-19 09:51:11.072513] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:23.727 [2024-11-19 09:51:11.148371] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:23:23.985 09:51:11 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:23.985 09:51:11 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:23:23.985 09:51:11 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:23:23.985 09:51:11 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:23.985 09:51:11 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:23:23.985 [2024-11-19 09:51:11.362589] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:23.985 null0 00:23:23.985 [2024-11-19 09:51:11.394558] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:23.985 [2024-11-19 09:51:11.394794] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:23:23.985 09:51:11 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:23.985 09:51:11 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:23:23.985 09:51:11 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:23:23.985 09:51:11 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:23:23.985 09:51:11 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:23.985 09:51:11 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:23.985 09:51:11 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:23.985 09:51:11 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:23.985 09:51:11 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:23:23.985 09:51:11 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:23.985 09:51:11 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:23:23.985 [2024-11-19 09:51:11.422538] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:23:23.985 request: 00:23:23.985 { 00:23:23.985 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:23:23.985 "secure_channel": false, 00:23:23.985 "listen_address": { 00:23:23.985 "trtype": "tcp", 00:23:23.985 "traddr": "127.0.0.1", 00:23:23.985 "trsvcid": "4420" 00:23:23.985 }, 00:23:23.985 "method": "nvmf_subsystem_add_listener", 
00:23:23.985 "req_id": 1 00:23:23.985 } 00:23:23.985 Got JSON-RPC error response 00:23:23.985 response: 00:23:23.985 { 00:23:23.985 "code": -32602, 00:23:23.985 "message": "Invalid parameters" 00:23:23.985 } 00:23:23.985 09:51:11 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:23.985 09:51:11 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:23:23.985 09:51:11 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:23.985 09:51:11 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:23.985 09:51:11 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:23.985 09:51:11 keyring_file -- keyring/file.sh@47 -- # bperfpid=85450 00:23:23.985 09:51:11 keyring_file -- keyring/file.sh@49 -- # waitforlisten 85450 /var/tmp/bperf.sock 00:23:23.985 09:51:11 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 85450 ']' 00:23:23.985 09:51:11 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:23.985 09:51:11 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:23.985 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:23.985 09:51:11 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:23.985 09:51:11 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:23.985 09:51:11 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:23:23.985 09:51:11 keyring_file -- keyring/file.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:23:23.985 [2024-11-19 09:51:11.488321] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 
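The bdevperf process whose startup banner appears here was launched with -z, so it idles behind its own RPC socket until the test drives it. Condensed from the traces that follow (the spdk_tgt side is already listening on 127.0.0.1:4420 with TLS enabled), the flow the keyring_file test performs against that socket is, commands as they appear in the log:

    BPERF_SOCK=/var/tmp/bperf.sock
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r "$BPERF_SOCK" -z &

    # Load the two PSK files into bdevperf's keyring under the names key0/key1.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$BPERF_SOCK" \
        keyring_file_add_key key0 /tmp/tmp.sd8DDuHaSG
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$BPERF_SOCK" \
        keyring_file_add_key key1 /tmp/tmp.impRhT2pM5

    # Attach an NVMe/TCP controller to the TLS-enabled target, referencing a key by name.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$BPERF_SOCK" \
        bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0

    # Start the configured randrw workload and collect the results JSON.
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s "$BPERF_SOCK" perform_tests

The refcount checks interleaved with these steps (keyring_get_keys piped through jq) confirm that attaching with --psk key0 bumps key0's refcnt to 2 while key1 stays at 1, and the negative cases that follow show an attach with the wrong key (key1) or a removed key file failing as expected.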
00:23:23.985 [2024-11-19 09:51:11.488421] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85450 ] 00:23:24.244 [2024-11-19 09:51:11.638584] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:24.244 [2024-11-19 09:51:11.702016] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:24.244 [2024-11-19 09:51:11.761227] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:23:24.244 09:51:11 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:24.244 09:51:11 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:23:24.244 09:51:11 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.sd8DDuHaSG 00:23:24.244 09:51:11 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.sd8DDuHaSG 00:23:24.502 09:51:12 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.impRhT2pM5 00:23:24.502 09:51:12 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.impRhT2pM5 00:23:25.069 09:51:12 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:23:25.069 09:51:12 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:23:25.069 09:51:12 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:25.069 09:51:12 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:25.069 09:51:12 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:23:25.328 09:51:12 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.sd8DDuHaSG == \/\t\m\p\/\t\m\p\.\s\d\8\D\D\u\H\a\S\G ]] 00:23:25.328 09:51:12 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:23:25.328 09:51:12 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:23:25.328 09:51:12 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:25.328 09:51:12 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:25.328 09:51:12 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:23:25.588 09:51:12 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.impRhT2pM5 == \/\t\m\p\/\t\m\p\.\i\m\p\R\h\T\2\p\M\5 ]] 00:23:25.588 09:51:13 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:23:25.588 09:51:13 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:23:25.588 09:51:13 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:25.588 09:51:13 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:25.588 09:51:13 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:25.588 09:51:13 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:23:25.848 09:51:13 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:23:25.848 09:51:13 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:23:25.848 09:51:13 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:23:25.848 09:51:13 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:25.848 09:51:13 keyring_file -- 
keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:25.848 09:51:13 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:25.848 09:51:13 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:23:26.107 09:51:13 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:23:26.107 09:51:13 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:23:26.107 09:51:13 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:23:26.368 [2024-11-19 09:51:13.836747] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:26.368 nvme0n1 00:23:26.368 09:51:13 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:23:26.368 09:51:13 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:23:26.368 09:51:13 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:26.368 09:51:13 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:26.368 09:51:13 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:23:26.368 09:51:13 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:26.626 09:51:14 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:23:26.626 09:51:14 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:23:26.626 09:51:14 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:23:26.626 09:51:14 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:26.626 09:51:14 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:26.626 09:51:14 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:23:26.626 09:51:14 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:26.893 09:51:14 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:23:26.893 09:51:14 keyring_file -- keyring/file.sh@63 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:27.152 Running I/O for 1 seconds... 
00:23:28.089 11750.00 IOPS, 45.90 MiB/s 00:23:28.089 Latency(us) 00:23:28.089 [2024-11-19T09:51:15.712Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:28.089 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:23:28.089 nvme0n1 : 1.01 11797.76 46.08 0.00 0.00 10816.39 4974.78 17515.99 00:23:28.089 [2024-11-19T09:51:15.712Z] =================================================================================================================== 00:23:28.089 [2024-11-19T09:51:15.712Z] Total : 11797.76 46.08 0.00 0.00 10816.39 4974.78 17515.99 00:23:28.089 { 00:23:28.089 "results": [ 00:23:28.089 { 00:23:28.089 "job": "nvme0n1", 00:23:28.089 "core_mask": "0x2", 00:23:28.089 "workload": "randrw", 00:23:28.089 "percentage": 50, 00:23:28.089 "status": "finished", 00:23:28.089 "queue_depth": 128, 00:23:28.089 "io_size": 4096, 00:23:28.089 "runtime": 1.006971, 00:23:28.089 "iops": 11797.757830165914, 00:23:28.089 "mibps": 46.0849915240856, 00:23:28.089 "io_failed": 0, 00:23:28.089 "io_timeout": 0, 00:23:28.089 "avg_latency_us": 10816.391326599325, 00:23:28.089 "min_latency_us": 4974.778181818182, 00:23:28.089 "max_latency_us": 17515.985454545455 00:23:28.089 } 00:23:28.089 ], 00:23:28.089 "core_count": 1 00:23:28.089 } 00:23:28.089 09:51:15 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:23:28.089 09:51:15 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:23:28.348 09:51:15 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:23:28.348 09:51:15 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:28.348 09:51:15 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:23:28.348 09:51:15 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:28.348 09:51:15 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:28.348 09:51:15 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:23:28.606 09:51:16 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:23:28.606 09:51:16 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:23:28.606 09:51:16 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:23:28.606 09:51:16 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:28.606 09:51:16 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:28.606 09:51:16 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:28.606 09:51:16 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:23:28.865 09:51:16 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:23:28.865 09:51:16 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:23:28.865 09:51:16 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:23:28.865 09:51:16 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:23:28.865 09:51:16 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:23:28.865 09:51:16 keyring_file -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:28.865 09:51:16 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:23:28.865 09:51:16 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:28.865 09:51:16 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:23:28.865 09:51:16 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:23:29.432 [2024-11-19 09:51:16.759212] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:29.432 [2024-11-19 09:51:16.759541] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1423770 (107): Transport endpoint is not connected 00:23:29.432 [2024-11-19 09:51:16.760531] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1423770 (9): Bad file descriptor 00:23:29.432 [2024-11-19 09:51:16.761528] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:23:29.432 [2024-11-19 09:51:16.761548] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:23:29.432 [2024-11-19 09:51:16.761558] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:23:29.432 [2024-11-19 09:51:16.761569] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:23:29.432 request: 00:23:29.432 { 00:23:29.432 "name": "nvme0", 00:23:29.432 "trtype": "tcp", 00:23:29.432 "traddr": "127.0.0.1", 00:23:29.432 "adrfam": "ipv4", 00:23:29.432 "trsvcid": "4420", 00:23:29.432 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:29.432 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:29.432 "prchk_reftag": false, 00:23:29.432 "prchk_guard": false, 00:23:29.432 "hdgst": false, 00:23:29.432 "ddgst": false, 00:23:29.432 "psk": "key1", 00:23:29.432 "allow_unrecognized_csi": false, 00:23:29.432 "method": "bdev_nvme_attach_controller", 00:23:29.432 "req_id": 1 00:23:29.432 } 00:23:29.432 Got JSON-RPC error response 00:23:29.432 response: 00:23:29.432 { 00:23:29.432 "code": -5, 00:23:29.432 "message": "Input/output error" 00:23:29.432 } 00:23:29.432 09:51:16 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:23:29.432 09:51:16 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:29.432 09:51:16 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:29.432 09:51:16 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:29.432 09:51:16 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:23:29.432 09:51:16 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:23:29.432 09:51:16 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:29.432 09:51:16 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:29.432 09:51:16 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:29.432 09:51:16 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:23:29.432 09:51:17 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:23:29.432 09:51:17 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:23:29.432 09:51:17 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:29.432 09:51:17 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:23:29.432 09:51:17 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:29.432 09:51:17 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:29.433 09:51:17 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:23:29.691 09:51:17 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:23:29.691 09:51:17 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:23:29.691 09:51:17 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:23:30.258 09:51:17 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:23:30.258 09:51:17 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:23:30.517 09:51:17 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:23:30.517 09:51:17 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:30.517 09:51:17 keyring_file -- keyring/file.sh@78 -- # jq length 00:23:30.775 09:51:18 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:23:30.775 09:51:18 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.sd8DDuHaSG 00:23:30.775 09:51:18 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.sd8DDuHaSG 00:23:30.775 09:51:18 keyring_file -- 
common/autotest_common.sh@652 -- # local es=0 00:23:30.775 09:51:18 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.sd8DDuHaSG 00:23:30.775 09:51:18 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:23:30.775 09:51:18 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:30.775 09:51:18 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:23:30.775 09:51:18 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:30.775 09:51:18 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.sd8DDuHaSG 00:23:30.775 09:51:18 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.sd8DDuHaSG 00:23:31.034 [2024-11-19 09:51:18.471899] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.sd8DDuHaSG': 0100660 00:23:31.034 [2024-11-19 09:51:18.471964] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:23:31.034 request: 00:23:31.034 { 00:23:31.034 "name": "key0", 00:23:31.034 "path": "/tmp/tmp.sd8DDuHaSG", 00:23:31.034 "method": "keyring_file_add_key", 00:23:31.034 "req_id": 1 00:23:31.034 } 00:23:31.034 Got JSON-RPC error response 00:23:31.034 response: 00:23:31.034 { 00:23:31.034 "code": -1, 00:23:31.034 "message": "Operation not permitted" 00:23:31.034 } 00:23:31.034 09:51:18 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:23:31.034 09:51:18 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:31.034 09:51:18 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:31.034 09:51:18 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:31.034 09:51:18 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.sd8DDuHaSG 00:23:31.034 09:51:18 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.sd8DDuHaSG 00:23:31.034 09:51:18 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.sd8DDuHaSG 00:23:31.293 09:51:18 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.sd8DDuHaSG 00:23:31.293 09:51:18 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:23:31.293 09:51:18 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:23:31.293 09:51:18 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:31.293 09:51:18 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:31.293 09:51:18 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:31.293 09:51:18 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:23:31.551 09:51:19 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:23:31.551 09:51:19 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:23:31.551 09:51:19 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:23:31.551 09:51:19 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:23:31.551 09:51:19 
keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:23:31.551 09:51:19 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:31.551 09:51:19 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:23:31.551 09:51:19 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:31.551 09:51:19 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:23:31.551 09:51:19 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:23:31.809 [2024-11-19 09:51:19.276144] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.sd8DDuHaSG': No such file or directory 00:23:31.810 [2024-11-19 09:51:19.276201] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:23:31.810 [2024-11-19 09:51:19.276254] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:23:31.810 [2024-11-19 09:51:19.276267] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:23:31.810 [2024-11-19 09:51:19.276277] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:23:31.810 [2024-11-19 09:51:19.276286] bdev_nvme.c:6669:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:23:31.810 request: 00:23:31.810 { 00:23:31.810 "name": "nvme0", 00:23:31.810 "trtype": "tcp", 00:23:31.810 "traddr": "127.0.0.1", 00:23:31.810 "adrfam": "ipv4", 00:23:31.810 "trsvcid": "4420", 00:23:31.810 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:31.810 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:31.810 "prchk_reftag": false, 00:23:31.810 "prchk_guard": false, 00:23:31.810 "hdgst": false, 00:23:31.810 "ddgst": false, 00:23:31.810 "psk": "key0", 00:23:31.810 "allow_unrecognized_csi": false, 00:23:31.810 "method": "bdev_nvme_attach_controller", 00:23:31.810 "req_id": 1 00:23:31.810 } 00:23:31.810 Got JSON-RPC error response 00:23:31.810 response: 00:23:31.810 { 00:23:31.810 "code": -19, 00:23:31.810 "message": "No such device" 00:23:31.810 } 00:23:31.810 09:51:19 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:23:31.810 09:51:19 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:31.810 09:51:19 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:31.810 09:51:19 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:31.810 09:51:19 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:23:31.810 09:51:19 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:23:32.068 09:51:19 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:23:32.068 09:51:19 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:23:32.068 09:51:19 keyring_file -- keyring/common.sh@17 -- # name=key0 00:23:32.068 09:51:19 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:23:32.068 
09:51:19 keyring_file -- keyring/common.sh@17 -- # digest=0 00:23:32.068 09:51:19 keyring_file -- keyring/common.sh@18 -- # mktemp 00:23:32.068 09:51:19 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.gAuTNCgNTe 00:23:32.068 09:51:19 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:23:32.069 09:51:19 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:23:32.069 09:51:19 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:23:32.069 09:51:19 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:23:32.069 09:51:19 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:23:32.069 09:51:19 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:23:32.069 09:51:19 keyring_file -- nvmf/common.sh@733 -- # python - 00:23:32.069 09:51:19 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.gAuTNCgNTe 00:23:32.069 09:51:19 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.gAuTNCgNTe 00:23:32.069 09:51:19 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.gAuTNCgNTe 00:23:32.069 09:51:19 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.gAuTNCgNTe 00:23:32.069 09:51:19 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.gAuTNCgNTe 00:23:32.327 09:51:19 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:23:32.327 09:51:19 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:23:32.894 nvme0n1 00:23:32.894 09:51:20 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:23:32.894 09:51:20 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:23:32.894 09:51:20 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:32.894 09:51:20 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:23:32.894 09:51:20 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:32.894 09:51:20 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:33.153 09:51:20 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:23:33.153 09:51:20 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:23:33.153 09:51:20 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:23:33.412 09:51:20 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:23:33.412 09:51:20 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:23:33.412 09:51:20 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:23:33.412 09:51:20 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:33.412 09:51:20 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:33.670 09:51:21 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:23:33.670 09:51:21 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:23:33.670 09:51:21 keyring_file -- 
keyring/common.sh@12 -- # get_key key0 00:23:33.670 09:51:21 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:33.670 09:51:21 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:23:33.670 09:51:21 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:33.670 09:51:21 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:33.929 09:51:21 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:23:33.929 09:51:21 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:23:33.929 09:51:21 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:23:34.188 09:51:21 keyring_file -- keyring/file.sh@105 -- # jq length 00:23:34.188 09:51:21 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:23:34.188 09:51:21 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:34.447 09:51:22 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:23:34.447 09:51:22 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.gAuTNCgNTe 00:23:34.447 09:51:22 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.gAuTNCgNTe 00:23:34.706 09:51:22 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.impRhT2pM5 00:23:34.706 09:51:22 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.impRhT2pM5 00:23:34.965 09:51:22 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:23:34.965 09:51:22 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:23:35.224 nvme0n1 00:23:35.224 09:51:22 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:23:35.224 09:51:22 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:23:35.794 09:51:23 keyring_file -- keyring/file.sh@113 -- # config='{ 00:23:35.794 "subsystems": [ 00:23:35.794 { 00:23:35.794 "subsystem": "keyring", 00:23:35.794 "config": [ 00:23:35.794 { 00:23:35.794 "method": "keyring_file_add_key", 00:23:35.794 "params": { 00:23:35.794 "name": "key0", 00:23:35.794 "path": "/tmp/tmp.gAuTNCgNTe" 00:23:35.794 } 00:23:35.794 }, 00:23:35.794 { 00:23:35.794 "method": "keyring_file_add_key", 00:23:35.794 "params": { 00:23:35.794 "name": "key1", 00:23:35.794 "path": "/tmp/tmp.impRhT2pM5" 00:23:35.794 } 00:23:35.794 } 00:23:35.794 ] 00:23:35.794 }, 00:23:35.794 { 00:23:35.794 "subsystem": "iobuf", 00:23:35.794 "config": [ 00:23:35.794 { 00:23:35.794 "method": "iobuf_set_options", 00:23:35.794 "params": { 00:23:35.794 "small_pool_count": 8192, 00:23:35.794 "large_pool_count": 1024, 00:23:35.794 "small_bufsize": 8192, 00:23:35.794 "large_bufsize": 135168, 00:23:35.794 "enable_numa": false 00:23:35.794 } 00:23:35.794 } 00:23:35.794 ] 00:23:35.794 }, 00:23:35.794 { 00:23:35.794 "subsystem": 
"sock", 00:23:35.794 "config": [ 00:23:35.794 { 00:23:35.794 "method": "sock_set_default_impl", 00:23:35.794 "params": { 00:23:35.794 "impl_name": "uring" 00:23:35.794 } 00:23:35.794 }, 00:23:35.794 { 00:23:35.794 "method": "sock_impl_set_options", 00:23:35.794 "params": { 00:23:35.794 "impl_name": "ssl", 00:23:35.794 "recv_buf_size": 4096, 00:23:35.794 "send_buf_size": 4096, 00:23:35.794 "enable_recv_pipe": true, 00:23:35.794 "enable_quickack": false, 00:23:35.794 "enable_placement_id": 0, 00:23:35.794 "enable_zerocopy_send_server": true, 00:23:35.794 "enable_zerocopy_send_client": false, 00:23:35.794 "zerocopy_threshold": 0, 00:23:35.794 "tls_version": 0, 00:23:35.794 "enable_ktls": false 00:23:35.794 } 00:23:35.794 }, 00:23:35.794 { 00:23:35.794 "method": "sock_impl_set_options", 00:23:35.794 "params": { 00:23:35.794 "impl_name": "posix", 00:23:35.794 "recv_buf_size": 2097152, 00:23:35.794 "send_buf_size": 2097152, 00:23:35.794 "enable_recv_pipe": true, 00:23:35.794 "enable_quickack": false, 00:23:35.794 "enable_placement_id": 0, 00:23:35.794 "enable_zerocopy_send_server": true, 00:23:35.794 "enable_zerocopy_send_client": false, 00:23:35.794 "zerocopy_threshold": 0, 00:23:35.794 "tls_version": 0, 00:23:35.794 "enable_ktls": false 00:23:35.794 } 00:23:35.794 }, 00:23:35.794 { 00:23:35.794 "method": "sock_impl_set_options", 00:23:35.794 "params": { 00:23:35.794 "impl_name": "uring", 00:23:35.794 "recv_buf_size": 2097152, 00:23:35.794 "send_buf_size": 2097152, 00:23:35.794 "enable_recv_pipe": true, 00:23:35.794 "enable_quickack": false, 00:23:35.794 "enable_placement_id": 0, 00:23:35.794 "enable_zerocopy_send_server": false, 00:23:35.794 "enable_zerocopy_send_client": false, 00:23:35.794 "zerocopy_threshold": 0, 00:23:35.794 "tls_version": 0, 00:23:35.794 "enable_ktls": false 00:23:35.794 } 00:23:35.794 } 00:23:35.794 ] 00:23:35.794 }, 00:23:35.794 { 00:23:35.794 "subsystem": "vmd", 00:23:35.794 "config": [] 00:23:35.794 }, 00:23:35.794 { 00:23:35.794 "subsystem": "accel", 00:23:35.794 "config": [ 00:23:35.794 { 00:23:35.794 "method": "accel_set_options", 00:23:35.794 "params": { 00:23:35.794 "small_cache_size": 128, 00:23:35.794 "large_cache_size": 16, 00:23:35.794 "task_count": 2048, 00:23:35.794 "sequence_count": 2048, 00:23:35.794 "buf_count": 2048 00:23:35.794 } 00:23:35.794 } 00:23:35.794 ] 00:23:35.794 }, 00:23:35.794 { 00:23:35.794 "subsystem": "bdev", 00:23:35.794 "config": [ 00:23:35.794 { 00:23:35.794 "method": "bdev_set_options", 00:23:35.794 "params": { 00:23:35.794 "bdev_io_pool_size": 65535, 00:23:35.794 "bdev_io_cache_size": 256, 00:23:35.794 "bdev_auto_examine": true, 00:23:35.794 "iobuf_small_cache_size": 128, 00:23:35.794 "iobuf_large_cache_size": 16 00:23:35.794 } 00:23:35.794 }, 00:23:35.794 { 00:23:35.794 "method": "bdev_raid_set_options", 00:23:35.794 "params": { 00:23:35.794 "process_window_size_kb": 1024, 00:23:35.794 "process_max_bandwidth_mb_sec": 0 00:23:35.794 } 00:23:35.794 }, 00:23:35.794 { 00:23:35.794 "method": "bdev_iscsi_set_options", 00:23:35.794 "params": { 00:23:35.794 "timeout_sec": 30 00:23:35.794 } 00:23:35.794 }, 00:23:35.794 { 00:23:35.794 "method": "bdev_nvme_set_options", 00:23:35.794 "params": { 00:23:35.794 "action_on_timeout": "none", 00:23:35.794 "timeout_us": 0, 00:23:35.794 "timeout_admin_us": 0, 00:23:35.794 "keep_alive_timeout_ms": 10000, 00:23:35.794 "arbitration_burst": 0, 00:23:35.794 "low_priority_weight": 0, 00:23:35.794 "medium_priority_weight": 0, 00:23:35.794 "high_priority_weight": 0, 00:23:35.794 "nvme_adminq_poll_period_us": 
10000, 00:23:35.794 "nvme_ioq_poll_period_us": 0, 00:23:35.794 "io_queue_requests": 512, 00:23:35.794 "delay_cmd_submit": true, 00:23:35.794 "transport_retry_count": 4, 00:23:35.794 "bdev_retry_count": 3, 00:23:35.794 "transport_ack_timeout": 0, 00:23:35.794 "ctrlr_loss_timeout_sec": 0, 00:23:35.794 "reconnect_delay_sec": 0, 00:23:35.794 "fast_io_fail_timeout_sec": 0, 00:23:35.794 "disable_auto_failback": false, 00:23:35.794 "generate_uuids": false, 00:23:35.794 "transport_tos": 0, 00:23:35.794 "nvme_error_stat": false, 00:23:35.794 "rdma_srq_size": 0, 00:23:35.794 "io_path_stat": false, 00:23:35.794 "allow_accel_sequence": false, 00:23:35.794 "rdma_max_cq_size": 0, 00:23:35.794 "rdma_cm_event_timeout_ms": 0, 00:23:35.794 "dhchap_digests": [ 00:23:35.794 "sha256", 00:23:35.794 "sha384", 00:23:35.794 "sha512" 00:23:35.794 ], 00:23:35.794 "dhchap_dhgroups": [ 00:23:35.794 "null", 00:23:35.794 "ffdhe2048", 00:23:35.794 "ffdhe3072", 00:23:35.794 "ffdhe4096", 00:23:35.794 "ffdhe6144", 00:23:35.794 "ffdhe8192" 00:23:35.794 ] 00:23:35.794 } 00:23:35.794 }, 00:23:35.794 { 00:23:35.794 "method": "bdev_nvme_attach_controller", 00:23:35.794 "params": { 00:23:35.794 "name": "nvme0", 00:23:35.794 "trtype": "TCP", 00:23:35.794 "adrfam": "IPv4", 00:23:35.794 "traddr": "127.0.0.1", 00:23:35.794 "trsvcid": "4420", 00:23:35.794 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:35.794 "prchk_reftag": false, 00:23:35.794 "prchk_guard": false, 00:23:35.795 "ctrlr_loss_timeout_sec": 0, 00:23:35.795 "reconnect_delay_sec": 0, 00:23:35.795 "fast_io_fail_timeout_sec": 0, 00:23:35.795 "psk": "key0", 00:23:35.795 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:35.795 "hdgst": false, 00:23:35.795 "ddgst": false, 00:23:35.795 "multipath": "multipath" 00:23:35.795 } 00:23:35.795 }, 00:23:35.795 { 00:23:35.795 "method": "bdev_nvme_set_hotplug", 00:23:35.795 "params": { 00:23:35.795 "period_us": 100000, 00:23:35.795 "enable": false 00:23:35.795 } 00:23:35.795 }, 00:23:35.795 { 00:23:35.795 "method": "bdev_wait_for_examine" 00:23:35.795 } 00:23:35.795 ] 00:23:35.795 }, 00:23:35.795 { 00:23:35.795 "subsystem": "nbd", 00:23:35.795 "config": [] 00:23:35.795 } 00:23:35.795 ] 00:23:35.795 }' 00:23:35.795 09:51:23 keyring_file -- keyring/file.sh@115 -- # killprocess 85450 00:23:35.795 09:51:23 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 85450 ']' 00:23:35.795 09:51:23 keyring_file -- common/autotest_common.sh@958 -- # kill -0 85450 00:23:35.795 09:51:23 keyring_file -- common/autotest_common.sh@959 -- # uname 00:23:35.795 09:51:23 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:35.795 09:51:23 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85450 00:23:35.795 killing process with pid 85450 00:23:35.795 Received shutdown signal, test time was about 1.000000 seconds 00:23:35.795 00:23:35.795 Latency(us) 00:23:35.795 [2024-11-19T09:51:23.418Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:35.795 [2024-11-19T09:51:23.418Z] =================================================================================================================== 00:23:35.795 [2024-11-19T09:51:23.418Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:35.795 09:51:23 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:35.795 09:51:23 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:35.795 09:51:23 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85450' 00:23:35.795 
09:51:23 keyring_file -- common/autotest_common.sh@973 -- # kill 85450 00:23:35.795 09:51:23 keyring_file -- common/autotest_common.sh@978 -- # wait 85450 00:23:36.055 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:36.055 09:51:23 keyring_file -- keyring/file.sh@118 -- # bperfpid=85698 00:23:36.055 09:51:23 keyring_file -- keyring/file.sh@120 -- # waitforlisten 85698 /var/tmp/bperf.sock 00:23:36.055 09:51:23 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 85698 ']' 00:23:36.055 09:51:23 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:36.055 09:51:23 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:36.055 09:51:23 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:23:36.055 "subsystems": [ 00:23:36.055 { 00:23:36.055 "subsystem": "keyring", 00:23:36.055 "config": [ 00:23:36.055 { 00:23:36.055 "method": "keyring_file_add_key", 00:23:36.055 "params": { 00:23:36.055 "name": "key0", 00:23:36.055 "path": "/tmp/tmp.gAuTNCgNTe" 00:23:36.055 } 00:23:36.055 }, 00:23:36.055 { 00:23:36.055 "method": "keyring_file_add_key", 00:23:36.055 "params": { 00:23:36.055 "name": "key1", 00:23:36.055 "path": "/tmp/tmp.impRhT2pM5" 00:23:36.055 } 00:23:36.055 } 00:23:36.055 ] 00:23:36.055 }, 00:23:36.055 { 00:23:36.055 "subsystem": "iobuf", 00:23:36.055 "config": [ 00:23:36.055 { 00:23:36.055 "method": "iobuf_set_options", 00:23:36.055 "params": { 00:23:36.055 "small_pool_count": 8192, 00:23:36.055 "large_pool_count": 1024, 00:23:36.055 "small_bufsize": 8192, 00:23:36.055 "large_bufsize": 135168, 00:23:36.055 "enable_numa": false 00:23:36.055 } 00:23:36.055 } 00:23:36.055 ] 00:23:36.055 }, 00:23:36.055 { 00:23:36.055 "subsystem": "sock", 00:23:36.055 "config": [ 00:23:36.055 { 00:23:36.055 "method": "sock_set_default_impl", 00:23:36.055 "params": { 00:23:36.055 "impl_name": "uring" 00:23:36.055 } 00:23:36.055 }, 00:23:36.055 { 00:23:36.055 "method": "sock_impl_set_options", 00:23:36.055 "params": { 00:23:36.055 "impl_name": "ssl", 00:23:36.055 "recv_buf_size": 4096, 00:23:36.055 "send_buf_size": 4096, 00:23:36.055 "enable_recv_pipe": true, 00:23:36.055 "enable_quickack": false, 00:23:36.055 "enable_placement_id": 0, 00:23:36.055 "enable_zerocopy_send_server": true, 00:23:36.055 "enable_zerocopy_send_client": false, 00:23:36.055 "zerocopy_threshold": 0, 00:23:36.055 "tls_version": 0, 00:23:36.055 "enable_ktls": false 00:23:36.055 } 00:23:36.055 }, 00:23:36.055 { 00:23:36.055 "method": "sock_impl_set_options", 00:23:36.055 "params": { 00:23:36.055 "impl_name": "posix", 00:23:36.055 "recv_buf_size": 2097152, 00:23:36.055 "send_buf_size": 2097152, 00:23:36.055 "enable_recv_pipe": true, 00:23:36.055 "enable_quickack": false, 00:23:36.055 "enable_placement_id": 0, 00:23:36.055 "enable_zerocopy_send_server": true, 00:23:36.055 "enable_zerocopy_send_client": false, 00:23:36.055 "zerocopy_threshold": 0, 00:23:36.055 "tls_version": 0, 00:23:36.055 "enable_ktls": false 00:23:36.055 } 00:23:36.055 }, 00:23:36.055 { 00:23:36.055 "method": "sock_impl_set_options", 00:23:36.055 "params": { 00:23:36.055 "impl_name": "uring", 00:23:36.055 "recv_buf_size": 2097152, 00:23:36.055 "send_buf_size": 2097152, 00:23:36.055 "enable_recv_pipe": true, 00:23:36.055 "enable_quickack": false, 00:23:36.055 "enable_placement_id": 0, 00:23:36.055 "enable_zerocopy_send_server": false, 00:23:36.055 "enable_zerocopy_send_client": false, 00:23:36.055 "zerocopy_threshold": 0, 00:23:36.055 "tls_version": 0, 00:23:36.055 
"enable_ktls": false 00:23:36.055 } 00:23:36.055 } 00:23:36.055 ] 00:23:36.055 }, 00:23:36.055 { 00:23:36.055 "subsystem": "vmd", 00:23:36.055 "config": [] 00:23:36.055 }, 00:23:36.055 { 00:23:36.055 "subsystem": "accel", 00:23:36.055 "config": [ 00:23:36.055 { 00:23:36.055 "method": "accel_set_options", 00:23:36.055 "params": { 00:23:36.055 "small_cache_size": 128, 00:23:36.055 "large_cache_size": 16, 00:23:36.055 "task_count": 2048, 00:23:36.055 "sequence_count": 2048, 00:23:36.055 "buf_count": 2048 00:23:36.055 } 00:23:36.055 } 00:23:36.055 ] 00:23:36.055 }, 00:23:36.055 { 00:23:36.055 "subsystem": "bdev", 00:23:36.055 "config": [ 00:23:36.055 { 00:23:36.055 "method": "bdev_set_options", 00:23:36.055 "params": { 00:23:36.055 "bdev_io_pool_size": 65535, 00:23:36.055 "bdev_io_cache_size": 256, 00:23:36.055 "bdev_auto_examine": true, 00:23:36.055 "iobuf_small_cache_size": 128, 00:23:36.055 "iobuf_large_cache_size": 16 00:23:36.055 } 00:23:36.055 }, 00:23:36.055 { 00:23:36.055 "method": "bdev_raid_set_options", 00:23:36.055 "params": { 00:23:36.055 "process_window_size_kb": 1024, 00:23:36.055 "process_max_bandwidth_mb_sec": 0 00:23:36.055 } 00:23:36.055 }, 00:23:36.055 { 00:23:36.055 "method": "bdev_iscsi_set_options", 00:23:36.055 "params": { 00:23:36.055 "timeout_sec": 30 00:23:36.055 } 00:23:36.055 }, 00:23:36.055 { 00:23:36.055 "method": "bdev_nvme_set_options", 00:23:36.055 "params": { 00:23:36.055 "action_on_timeout": "none", 00:23:36.055 "timeout_us": 0, 00:23:36.055 "timeout_admin_us": 0, 00:23:36.055 "keep_alive_timeout_ms": 10000, 00:23:36.055 "arbitration_burst": 0, 00:23:36.055 "low_priority_weight": 0, 00:23:36.056 "medium_priority_weight": 0, 00:23:36.056 "high_priority_weight": 0, 00:23:36.056 "nvme_adminq_poll_period_us": 10000, 00:23:36.056 "nvme_io 09:51:23 keyring_file -- keyring/file.sh@116 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:23:36.056 q_poll_period_us": 0, 00:23:36.056 "io_queue_requests": 512, 00:23:36.056 "delay_cmd_submit": true, 00:23:36.056 "transport_retry_count": 4, 00:23:36.056 "bdev_retry_count": 3, 00:23:36.056 "transport_ack_timeout": 0, 00:23:36.056 "ctrlr_loss_timeout_sec": 0, 00:23:36.056 "reconnect_delay_sec": 0, 00:23:36.056 "fast_io_fail_timeout_sec": 0, 00:23:36.056 "disable_auto_failback": false, 00:23:36.056 "generate_uuids": false, 00:23:36.056 "transport_tos": 0, 00:23:36.056 "nvme_error_stat": false, 00:23:36.056 "rdma_srq_size": 0, 00:23:36.056 "io_path_stat": false, 00:23:36.056 "allow_accel_sequence": false, 00:23:36.056 "rdma_max_cq_size": 0, 00:23:36.056 "rdma_cm_event_timeout_ms": 0, 00:23:36.056 "dhchap_digests": [ 00:23:36.056 "sha256", 00:23:36.056 "sha384", 00:23:36.056 "sha512" 00:23:36.056 ], 00:23:36.056 "dhchap_dhgroups": [ 00:23:36.056 "null", 00:23:36.056 "ffdhe2048", 00:23:36.056 "ffdhe3072", 00:23:36.056 "ffdhe4096", 00:23:36.056 "ffdhe6144", 00:23:36.056 "ffdhe8192" 00:23:36.056 ] 00:23:36.056 } 00:23:36.056 }, 00:23:36.056 { 00:23:36.056 "method": "bdev_nvme_attach_controller", 00:23:36.056 "params": { 00:23:36.056 "name": "nvme0", 00:23:36.056 "trtype": "TCP", 00:23:36.056 "adrfam": "IPv4", 00:23:36.056 "traddr": "127.0.0.1", 00:23:36.056 "trsvcid": "4420", 00:23:36.056 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:36.056 "prchk_reftag": false, 00:23:36.056 "prchk_guard": false, 00:23:36.056 "ctrlr_loss_timeout_sec": 0, 00:23:36.056 "reconnect_delay_sec": 0, 00:23:36.056 "fast_io_fail_timeout_sec": 0, 00:23:36.056 "psk": 
"key0", 00:23:36.056 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:36.056 "hdgst": false, 00:23:36.056 "ddgst": false, 00:23:36.056 "multipath": "multipath" 00:23:36.056 } 00:23:36.056 }, 00:23:36.056 { 00:23:36.056 "method": "bdev_nvme_set_hotplug", 00:23:36.056 "params": { 00:23:36.056 "period_us": 100000, 00:23:36.056 "enable": false 00:23:36.056 } 00:23:36.056 }, 00:23:36.056 { 00:23:36.056 "method": "bdev_wait_for_examine" 00:23:36.056 } 00:23:36.056 ] 00:23:36.056 }, 00:23:36.056 { 00:23:36.056 "subsystem": "nbd", 00:23:36.056 "config": [] 00:23:36.056 } 00:23:36.056 ] 00:23:36.056 }' 00:23:36.056 09:51:23 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:36.056 09:51:23 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:36.056 09:51:23 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:23:36.056 [2024-11-19 09:51:23.479127] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 00:23:36.056 [2024-11-19 09:51:23.479220] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85698 ] 00:23:36.056 [2024-11-19 09:51:23.620113] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:36.056 [2024-11-19 09:51:23.673536] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:36.315 [2024-11-19 09:51:23.808891] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:23:36.315 [2024-11-19 09:51:23.864824] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:36.883 09:51:24 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:36.883 09:51:24 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:23:36.883 09:51:24 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:23:36.883 09:51:24 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:36.883 09:51:24 keyring_file -- keyring/file.sh@121 -- # jq length 00:23:37.141 09:51:24 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:23:37.141 09:51:24 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:23:37.141 09:51:24 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:23:37.141 09:51:24 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:37.141 09:51:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:37.141 09:51:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:23:37.141 09:51:24 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:37.708 09:51:25 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:23:37.708 09:51:25 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:23:37.708 09:51:25 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:37.708 09:51:25 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:23:37.708 09:51:25 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:37.708 09:51:25 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:37.708 09:51:25 keyring_file 
-- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:23:37.967 09:51:25 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:23:37.967 09:51:25 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:23:37.967 09:51:25 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:23:37.967 09:51:25 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:23:37.967 09:51:25 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:23:37.967 09:51:25 keyring_file -- keyring/file.sh@1 -- # cleanup 00:23:37.967 09:51:25 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.gAuTNCgNTe /tmp/tmp.impRhT2pM5 00:23:37.967 09:51:25 keyring_file -- keyring/file.sh@20 -- # killprocess 85698 00:23:37.967 09:51:25 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 85698 ']' 00:23:37.967 09:51:25 keyring_file -- common/autotest_common.sh@958 -- # kill -0 85698 00:23:37.967 09:51:25 keyring_file -- common/autotest_common.sh@959 -- # uname 00:23:38.226 09:51:25 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:38.226 09:51:25 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85698 00:23:38.226 09:51:25 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:38.226 09:51:25 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:38.226 killing process with pid 85698 00:23:38.226 09:51:25 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85698' 00:23:38.226 09:51:25 keyring_file -- common/autotest_common.sh@973 -- # kill 85698 00:23:38.226 Received shutdown signal, test time was about 1.000000 seconds 00:23:38.226 00:23:38.226 Latency(us) 00:23:38.226 [2024-11-19T09:51:25.849Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:38.226 [2024-11-19T09:51:25.849Z] =================================================================================================================== 00:23:38.226 [2024-11-19T09:51:25.849Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:38.226 09:51:25 keyring_file -- common/autotest_common.sh@978 -- # wait 85698 00:23:38.226 09:51:25 keyring_file -- keyring/file.sh@21 -- # killprocess 85440 00:23:38.226 09:51:25 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 85440 ']' 00:23:38.226 09:51:25 keyring_file -- common/autotest_common.sh@958 -- # kill -0 85440 00:23:38.227 09:51:25 keyring_file -- common/autotest_common.sh@959 -- # uname 00:23:38.227 09:51:25 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:38.227 09:51:25 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85440 00:23:38.227 09:51:25 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:38.227 09:51:25 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:38.227 killing process with pid 85440 00:23:38.227 09:51:25 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85440' 00:23:38.227 09:51:25 keyring_file -- common/autotest_common.sh@973 -- # kill 85440 00:23:38.227 09:51:25 keyring_file -- common/autotest_common.sh@978 -- # wait 85440 00:23:38.794 00:23:38.794 real 0m15.780s 00:23:38.794 user 0m40.286s 00:23:38.794 sys 0m2.979s 00:23:38.794 09:51:26 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:38.794 09:51:26 keyring_file -- 
common/autotest_common.sh@10 -- # set +x 00:23:38.794 ************************************ 00:23:38.794 END TEST keyring_file 00:23:38.794 ************************************ 00:23:38.794 09:51:26 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:23:38.794 09:51:26 -- spdk/autotest.sh@294 -- # run_test keyring_linux /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:23:38.794 09:51:26 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:38.794 09:51:26 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:38.794 09:51:26 -- common/autotest_common.sh@10 -- # set +x 00:23:38.794 ************************************ 00:23:38.794 START TEST keyring_linux 00:23:38.794 ************************************ 00:23:38.795 09:51:26 keyring_linux -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:23:38.795 Joined session keyring: 871196646 00:23:38.795 * Looking for test storage... 00:23:38.795 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:23:38.795 09:51:26 keyring_linux -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:38.795 09:51:26 keyring_linux -- common/autotest_common.sh@1693 -- # lcov --version 00:23:38.795 09:51:26 keyring_linux -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:39.054 09:51:26 keyring_linux -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:39.054 09:51:26 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:39.054 09:51:26 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:39.054 09:51:26 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:39.054 09:51:26 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:23:39.054 09:51:26 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:23:39.054 09:51:26 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:23:39.054 09:51:26 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:23:39.054 09:51:26 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:23:39.054 09:51:26 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:23:39.054 09:51:26 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:23:39.054 09:51:26 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:39.054 09:51:26 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:23:39.054 09:51:26 keyring_linux -- scripts/common.sh@345 -- # : 1 00:23:39.054 09:51:26 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:39.054 09:51:26 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:39.054 09:51:26 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:23:39.054 09:51:26 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:23:39.054 09:51:26 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:39.054 09:51:26 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:23:39.054 09:51:26 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:23:39.054 09:51:26 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:23:39.054 09:51:26 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:23:39.054 09:51:26 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:39.054 09:51:26 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:23:39.054 09:51:26 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:23:39.054 09:51:26 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:39.054 09:51:26 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:39.054 09:51:26 keyring_linux -- scripts/common.sh@368 -- # return 0 00:23:39.054 09:51:26 keyring_linux -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:39.054 09:51:26 keyring_linux -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:39.054 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:39.054 --rc genhtml_branch_coverage=1 00:23:39.054 --rc genhtml_function_coverage=1 00:23:39.054 --rc genhtml_legend=1 00:23:39.054 --rc geninfo_all_blocks=1 00:23:39.054 --rc geninfo_unexecuted_blocks=1 00:23:39.054 00:23:39.054 ' 00:23:39.054 09:51:26 keyring_linux -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:39.054 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:39.054 --rc genhtml_branch_coverage=1 00:23:39.054 --rc genhtml_function_coverage=1 00:23:39.054 --rc genhtml_legend=1 00:23:39.054 --rc geninfo_all_blocks=1 00:23:39.054 --rc geninfo_unexecuted_blocks=1 00:23:39.054 00:23:39.054 ' 00:23:39.054 09:51:26 keyring_linux -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:39.054 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:39.054 --rc genhtml_branch_coverage=1 00:23:39.054 --rc genhtml_function_coverage=1 00:23:39.054 --rc genhtml_legend=1 00:23:39.054 --rc geninfo_all_blocks=1 00:23:39.054 --rc geninfo_unexecuted_blocks=1 00:23:39.054 00:23:39.054 ' 00:23:39.054 09:51:26 keyring_linux -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:39.054 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:39.054 --rc genhtml_branch_coverage=1 00:23:39.054 --rc genhtml_function_coverage=1 00:23:39.054 --rc genhtml_legend=1 00:23:39.054 --rc geninfo_all_blocks=1 00:23:39.054 --rc geninfo_unexecuted_blocks=1 00:23:39.054 00:23:39.054 ' 00:23:39.054 09:51:26 keyring_linux -- keyring/linux.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:23:39.054 09:51:26 keyring_linux -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:39.054 09:51:26 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:23:39.054 09:51:26 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:39.054 09:51:26 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:39.055 09:51:26 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:39.055 09:51:26 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:39.055 09:51:26 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:39.055 09:51:26 
keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:39.055 09:51:26 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:39.055 09:51:26 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:39.055 09:51:26 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:39.055 09:51:26 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:39.055 09:51:26 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9203ba0c-8506-4f0b-a886-a7f874c4694c 00:23:39.055 09:51:26 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=9203ba0c-8506-4f0b-a886-a7f874c4694c 00:23:39.055 09:51:26 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:39.055 09:51:26 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:39.055 09:51:26 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:39.055 09:51:26 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:39.055 09:51:26 keyring_linux -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:39.055 09:51:26 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:23:39.055 09:51:26 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:39.055 09:51:26 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:39.055 09:51:26 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:39.055 09:51:26 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:39.055 09:51:26 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:39.055 09:51:26 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:39.055 09:51:26 keyring_linux -- paths/export.sh@5 -- # export PATH 00:23:39.055 09:51:26 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:39.055 09:51:26 keyring_linux -- nvmf/common.sh@51 -- # : 0 
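The scripts/common.sh trace above (cmp_versions invoked as "lt 1.15 2") only decides whether the installed lcov predates 2.x so the matching --rc option names get used. A condensed sketch of that comparison, assuming purely numeric dotted components (the real helper also validates each field through its decimal helper):

    # return 0 when $1 < $2, mirroring the cmp_versions '<' path traced above
    version_lt() {
        local IFS=.-:
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1
    }
    version_lt 1.15 2 && echo "lcov < 2: keep the legacy lcov_* --rc names"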
00:23:39.055 09:51:26 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:39.055 09:51:26 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:39.055 09:51:26 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:39.055 09:51:26 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:39.055 09:51:26 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:39.055 09:51:26 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:39.055 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:39.055 09:51:26 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:39.055 09:51:26 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:39.055 09:51:26 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:39.055 09:51:26 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:23:39.055 09:51:26 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:23:39.055 09:51:26 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:23:39.055 09:51:26 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:23:39.055 09:51:26 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:23:39.055 09:51:26 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:23:39.055 09:51:26 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:23:39.055 09:51:26 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:23:39.055 09:51:26 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:23:39.055 09:51:26 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:23:39.055 09:51:26 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:23:39.055 09:51:26 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:23:39.055 09:51:26 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:23:39.055 09:51:26 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:23:39.055 09:51:26 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:23:39.055 09:51:26 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:23:39.055 09:51:26 keyring_linux -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:23:39.055 09:51:26 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:23:39.055 09:51:26 keyring_linux -- nvmf/common.sh@733 -- # python - 00:23:39.055 09:51:26 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:23:39.055 /tmp/:spdk-test:key0 00:23:39.055 09:51:26 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:23:39.055 09:51:26 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:23:39.055 09:51:26 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:23:39.055 09:51:26 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:23:39.055 09:51:26 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:23:39.055 09:51:26 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:23:39.055 09:51:26 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:23:39.055 09:51:26 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 
112233445566778899aabbccddeeff00 0 00:23:39.055 09:51:26 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:23:39.055 09:51:26 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:23:39.055 09:51:26 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:23:39.055 09:51:26 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:23:39.055 09:51:26 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:23:39.055 09:51:26 keyring_linux -- nvmf/common.sh@733 -- # python - 00:23:39.055 09:51:26 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:23:39.055 /tmp/:spdk-test:key1 00:23:39.055 09:51:26 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:23:39.055 09:51:26 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=85824 00:23:39.055 09:51:26 keyring_linux -- keyring/linux.sh@50 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:39.055 09:51:26 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 85824 00:23:39.055 09:51:26 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 85824 ']' 00:23:39.055 09:51:26 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:39.055 09:51:26 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:39.055 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:39.055 09:51:26 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:39.055 09:51:26 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:39.055 09:51:26 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:23:39.055 [2024-11-19 09:51:26.670740] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 
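Unlike the keyring_file case, linux.sh stores its PSKs in the kernel session keyring (the whole test runs inside the throwaway session joined by keyctl-session-wrapper at the top of this test). A minimal sketch of loading one of the keys prepared above and reading it back with the stock keyutils CLI, using the PSK string printed by prep_key; this is exactly what the test does a few lines below:

    psk='NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:'
    # add the PSK to the session keyring; keyctl prints the new key's serial number
    sn=$(keyctl add user :spdk-test:key0 "$psk" @s)
    keyctl search @s user :spdk-test:key0    # resolves the name back to the same serial
    keyctl print "$sn"                       # dumps the stored PSK string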
00:23:39.055 [2024-11-19 09:51:26.670842] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85824 ] 00:23:39.314 [2024-11-19 09:51:26.825022] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:39.314 [2024-11-19 09:51:26.888187] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:39.573 [2024-11-19 09:51:26.966485] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:23:40.140 09:51:27 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:40.140 09:51:27 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:23:40.140 09:51:27 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:23:40.140 09:51:27 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:40.140 09:51:27 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:23:40.140 [2024-11-19 09:51:27.658030] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:40.140 null0 00:23:40.140 [2024-11-19 09:51:27.690027] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:40.140 [2024-11-19 09:51:27.690254] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:23:40.140 09:51:27 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:40.140 09:51:27 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:23:40.140 661915308 00:23:40.140 09:51:27 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:23:40.140 943351203 00:23:40.140 09:51:27 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=85843 00:23:40.140 09:51:27 keyring_linux -- keyring/linux.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:23:40.140 09:51:27 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 85843 /var/tmp/bperf.sock 00:23:40.140 09:51:27 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 85843 ']' 00:23:40.140 09:51:27 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:40.140 09:51:27 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:40.140 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:40.140 09:51:27 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:40.140 09:51:27 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:40.140 09:51:27 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:23:40.399 [2024-11-19 09:51:27.769443] Starting SPDK v25.01-pre git sha1 53ca6a885 / DPDK 24.03.0 initialization... 
00:23:40.399 [2024-11-19 09:51:27.769540] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85843 ] 00:23:40.399 [2024-11-19 09:51:27.918945] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:40.399 [2024-11-19 09:51:27.982314] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:40.399 09:51:28 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:40.399 09:51:28 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:23:40.399 09:51:28 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:23:40.399 09:51:28 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:23:40.658 09:51:28 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:23:40.658 09:51:28 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:23:41.224 [2024-11-19 09:51:28.553539] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:23:41.224 09:51:28 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:23:41.224 09:51:28 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:23:41.224 [2024-11-19 09:51:28.843934] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:41.483 nvme0n1 00:23:41.483 09:51:28 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:23:41.483 09:51:28 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:23:41.483 09:51:28 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:23:41.483 09:51:28 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:23:41.483 09:51:28 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:41.483 09:51:28 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:23:41.742 09:51:29 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:23:41.742 09:51:29 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:23:41.742 09:51:29 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:23:41.742 09:51:29 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:23:41.742 09:51:29 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:41.743 09:51:29 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:41.743 09:51:29 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:23:42.003 09:51:29 keyring_linux -- keyring/linux.sh@25 -- # sn=661915308 00:23:42.003 09:51:29 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:23:42.003 09:51:29 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 
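check_keys, traced above, cross-checks SPDK's view of a key against the kernel's: the serial number that keyring_get_keys reports over the bperf socket has to match what keyctl search finds for the same name. A condensed sketch of that check, with the socket path and key name from this run:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/bperf.sock
    # serial number as reported by the SPDK keyring_linux module
    spdk_sn=$($rpc -s $sock keyring_get_keys | jq -r '.[] | select(.name == ":spdk-test:key0").sn')
    # serial number as reported by the kernel session keyring
    kernel_sn=$(keyctl search @s user :spdk-test:key0)
    [[ "$spdk_sn" == "$kernel_sn" ]] && echo "key0 maps to kernel key $kernel_sn"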
00:23:42.003 09:51:29 keyring_linux -- keyring/linux.sh@26 -- # [[ 661915308 == \6\6\1\9\1\5\3\0\8 ]] 00:23:42.003 09:51:29 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 661915308 00:23:42.003 09:51:29 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:23:42.003 09:51:29 keyring_linux -- keyring/linux.sh@79 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:42.295 Running I/O for 1 seconds... 00:23:43.231 10072.00 IOPS, 39.34 MiB/s 00:23:43.231 Latency(us) 00:23:43.231 [2024-11-19T09:51:30.854Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:43.231 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:23:43.231 nvme0n1 : 1.01 10113.51 39.51 0.00 0.00 12604.07 5213.09 20137.43 00:23:43.231 [2024-11-19T09:51:30.854Z] =================================================================================================================== 00:23:43.231 [2024-11-19T09:51:30.854Z] Total : 10113.51 39.51 0.00 0.00 12604.07 5213.09 20137.43 00:23:43.231 { 00:23:43.231 "results": [ 00:23:43.231 { 00:23:43.231 "job": "nvme0n1", 00:23:43.231 "core_mask": "0x2", 00:23:43.231 "workload": "randread", 00:23:43.231 "status": "finished", 00:23:43.231 "queue_depth": 128, 00:23:43.231 "io_size": 4096, 00:23:43.231 "runtime": 1.008651, 00:23:43.231 "iops": 10113.50804192927, 00:23:43.231 "mibps": 39.50589078878621, 00:23:43.231 "io_failed": 0, 00:23:43.231 "io_timeout": 0, 00:23:43.231 "avg_latency_us": 12604.066313641264, 00:23:43.231 "min_latency_us": 5213.090909090909, 00:23:43.231 "max_latency_us": 20137.425454545453 00:23:43.231 } 00:23:43.231 ], 00:23:43.231 "core_count": 1 00:23:43.231 } 00:23:43.231 09:51:30 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:23:43.231 09:51:30 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:23:43.490 09:51:30 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:23:43.490 09:51:30 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:23:43.490 09:51:30 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:23:43.490 09:51:30 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:23:43.490 09:51:30 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:43.490 09:51:30 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:23:43.749 09:51:31 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:23:43.749 09:51:31 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:23:43.749 09:51:31 keyring_linux -- keyring/linux.sh@23 -- # return 00:23:43.749 09:51:31 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:23:43.749 09:51:31 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:23:43.749 09:51:31 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 
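The attach attempt that starts here deliberately uses :spdk-test:key1 and is wrapped in NOT/valid_exec_arg, i.e. the test only passes if the RPC fails. Stripped of the helper plumbing, the expectation amounts to the sketch below (the real helpers additionally distinguish shell functions from binaries and propagate the error code):

    # expect the attach with key1 to be rejected; treat success as a test error
    if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
           bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
           -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1; then
        echo "unexpected success: attach with key1 was expected to fail" >&2
        exit 1
    fi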
00:23:43.749 09:51:31 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:23:43.749 09:51:31 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:43.749 09:51:31 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:23:43.749 09:51:31 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:43.749 09:51:31 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:23:43.749 09:51:31 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:23:44.008 [2024-11-19 09:51:31.515457] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:44.008 [2024-11-19 09:51:31.516434] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbc75d0 (107): Transport endpoint is not connected 00:23:44.008 [2024-11-19 09:51:31.517429] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbc75d0 (9): Bad file descriptor 00:23:44.008 [2024-11-19 09:51:31.518426] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:23:44.008 [2024-11-19 09:51:31.518486] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:23:44.008 [2024-11-19 09:51:31.518501] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:23:44.008 [2024-11-19 09:51:31.518515] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:23:44.008 request: 00:23:44.008 { 00:23:44.008 "name": "nvme0", 00:23:44.008 "trtype": "tcp", 00:23:44.008 "traddr": "127.0.0.1", 00:23:44.008 "adrfam": "ipv4", 00:23:44.008 "trsvcid": "4420", 00:23:44.008 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:44.008 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:44.008 "prchk_reftag": false, 00:23:44.008 "prchk_guard": false, 00:23:44.008 "hdgst": false, 00:23:44.008 "ddgst": false, 00:23:44.008 "psk": ":spdk-test:key1", 00:23:44.008 "allow_unrecognized_csi": false, 00:23:44.008 "method": "bdev_nvme_attach_controller", 00:23:44.008 "req_id": 1 00:23:44.008 } 00:23:44.008 Got JSON-RPC error response 00:23:44.008 response: 00:23:44.008 { 00:23:44.008 "code": -5, 00:23:44.008 "message": "Input/output error" 00:23:44.008 } 00:23:44.008 09:51:31 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:23:44.008 09:51:31 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:44.008 09:51:31 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:44.008 09:51:31 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:44.008 09:51:31 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:23:44.008 09:51:31 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:23:44.008 09:51:31 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:23:44.008 09:51:31 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:23:44.008 09:51:31 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:23:44.008 09:51:31 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:23:44.008 09:51:31 keyring_linux -- keyring/linux.sh@33 -- # sn=661915308 00:23:44.008 09:51:31 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 661915308 00:23:44.008 1 links removed 00:23:44.008 09:51:31 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:23:44.008 09:51:31 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:23:44.008 09:51:31 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:23:44.008 09:51:31 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:23:44.008 09:51:31 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:23:44.008 09:51:31 keyring_linux -- keyring/linux.sh@33 -- # sn=943351203 00:23:44.008 09:51:31 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 943351203 00:23:44.008 1 links removed 00:23:44.008 09:51:31 keyring_linux -- keyring/linux.sh@41 -- # killprocess 85843 00:23:44.008 09:51:31 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 85843 ']' 00:23:44.008 09:51:31 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 85843 00:23:44.008 09:51:31 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:23:44.008 09:51:31 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:44.008 09:51:31 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85843 00:23:44.008 killing process with pid 85843 00:23:44.008 Received shutdown signal, test time was about 1.000000 seconds 00:23:44.008 00:23:44.008 Latency(us) 00:23:44.008 [2024-11-19T09:51:31.631Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:44.008 [2024-11-19T09:51:31.631Z] =================================================================================================================== 00:23:44.008 [2024-11-19T09:51:31.631Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:44.008 09:51:31 keyring_linux -- 
common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:44.008 09:51:31 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:44.008 09:51:31 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85843' 00:23:44.008 09:51:31 keyring_linux -- common/autotest_common.sh@973 -- # kill 85843 00:23:44.008 09:51:31 keyring_linux -- common/autotest_common.sh@978 -- # wait 85843 00:23:44.267 09:51:31 keyring_linux -- keyring/linux.sh@42 -- # killprocess 85824 00:23:44.267 09:51:31 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 85824 ']' 00:23:44.267 09:51:31 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 85824 00:23:44.267 09:51:31 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:23:44.267 09:51:31 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:44.267 09:51:31 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85824 00:23:44.267 killing process with pid 85824 00:23:44.267 09:51:31 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:44.267 09:51:31 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:44.267 09:51:31 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85824' 00:23:44.267 09:51:31 keyring_linux -- common/autotest_common.sh@973 -- # kill 85824 00:23:44.267 09:51:31 keyring_linux -- common/autotest_common.sh@978 -- # wait 85824 00:23:44.835 00:23:44.835 real 0m6.013s 00:23:44.835 user 0m11.431s 00:23:44.835 sys 0m1.634s 00:23:44.835 ************************************ 00:23:44.835 END TEST keyring_linux 00:23:44.835 ************************************ 00:23:44.835 09:51:32 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:44.835 09:51:32 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:23:44.835 09:51:32 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:23:44.835 09:51:32 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:23:44.835 09:51:32 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:23:44.835 09:51:32 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:23:44.835 09:51:32 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:23:44.835 09:51:32 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:23:44.835 09:51:32 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:23:44.835 09:51:32 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:23:44.835 09:51:32 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:23:44.835 09:51:32 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:23:44.835 09:51:32 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:23:44.835 09:51:32 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:23:44.835 09:51:32 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:23:44.835 09:51:32 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:23:44.835 09:51:32 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:23:44.835 09:51:32 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:23:44.835 09:51:32 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:23:44.835 09:51:32 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:44.835 09:51:32 -- common/autotest_common.sh@10 -- # set +x 00:23:44.835 09:51:32 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:23:44.835 09:51:32 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:23:44.835 09:51:32 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:23:44.835 09:51:32 -- common/autotest_common.sh@10 -- # set +x 00:23:46.738 INFO: APP EXITING 00:23:46.738 INFO: killing all VMs 
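Teardown for both keyring tests goes through an EXIT trap (trap cleanup EXIT was installed earlier in linux.sh); on the Linux side it resolves each test key's serial and unlinks it, which is what produced the "1 links removed" lines above. A condensed sketch of that cleanup; removing the on-disk copies is an assumption here, since the log only shows the keyctl unlinks:

    cleanup() {
        local name sn
        for name in :spdk-test:key0 :spdk-test:key1; do
            sn=$(keyctl search @s user "$name") || continue
            keyctl unlink "$sn"    # reports "1 links removed" on success
        done
        rm -f /tmp/:spdk-test:key0 /tmp/:spdk-test:key1   # on-disk copies (assumption)
    }
    trap cleanup EXIT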
00:23:46.738 INFO: killing vhost app 00:23:46.738 INFO: EXIT DONE 00:23:47.306 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:47.306 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:23:47.306 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:23:48.243 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:48.243 Cleaning 00:23:48.243 Removing: /var/run/dpdk/spdk0/config 00:23:48.243 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:23:48.243 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:23:48.243 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:23:48.243 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:23:48.243 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:23:48.243 Removing: /var/run/dpdk/spdk0/hugepage_info 00:23:48.243 Removing: /var/run/dpdk/spdk1/config 00:23:48.243 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:23:48.243 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:23:48.243 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:23:48.243 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:23:48.243 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:23:48.243 Removing: /var/run/dpdk/spdk1/hugepage_info 00:23:48.243 Removing: /var/run/dpdk/spdk2/config 00:23:48.243 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:23:48.243 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:23:48.243 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:23:48.243 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:23:48.243 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:23:48.243 Removing: /var/run/dpdk/spdk2/hugepage_info 00:23:48.243 Removing: /var/run/dpdk/spdk3/config 00:23:48.243 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:23:48.243 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:23:48.243 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:23:48.243 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:23:48.243 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:23:48.243 Removing: /var/run/dpdk/spdk3/hugepage_info 00:23:48.243 Removing: /var/run/dpdk/spdk4/config 00:23:48.243 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:23:48.243 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:23:48.243 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:23:48.243 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:23:48.243 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:23:48.244 Removing: /var/run/dpdk/spdk4/hugepage_info 00:23:48.244 Removing: /dev/shm/nvmf_trace.0 00:23:48.244 Removing: /dev/shm/spdk_tgt_trace.pid56772 00:23:48.244 Removing: /var/run/dpdk/spdk0 00:23:48.244 Removing: /var/run/dpdk/spdk1 00:23:48.244 Removing: /var/run/dpdk/spdk2 00:23:48.244 Removing: /var/run/dpdk/spdk3 00:23:48.244 Removing: /var/run/dpdk/spdk4 00:23:48.244 Removing: /var/run/dpdk/spdk_pid56613 00:23:48.244 Removing: /var/run/dpdk/spdk_pid56772 00:23:48.244 Removing: /var/run/dpdk/spdk_pid56965 00:23:48.244 Removing: /var/run/dpdk/spdk_pid57051 00:23:48.244 Removing: /var/run/dpdk/spdk_pid57079 00:23:48.244 Removing: /var/run/dpdk/spdk_pid57194 00:23:48.244 Removing: /var/run/dpdk/spdk_pid57212 00:23:48.244 Removing: /var/run/dpdk/spdk_pid57346 00:23:48.244 Removing: /var/run/dpdk/spdk_pid57549 00:23:48.244 Removing: /var/run/dpdk/spdk_pid57703 00:23:48.244 Removing: /var/run/dpdk/spdk_pid57781 00:23:48.244 
Removing: /var/run/dpdk/spdk_pid57852 00:23:48.244 Removing: /var/run/dpdk/spdk_pid57949 00:23:48.244 Removing: /var/run/dpdk/spdk_pid58035 00:23:48.244 Removing: /var/run/dpdk/spdk_pid58068 00:23:48.244 Removing: /var/run/dpdk/spdk_pid58098 00:23:48.244 Removing: /var/run/dpdk/spdk_pid58173 00:23:48.244 Removing: /var/run/dpdk/spdk_pid58256 00:23:48.244 Removing: /var/run/dpdk/spdk_pid58706 00:23:48.244 Removing: /var/run/dpdk/spdk_pid58752 00:23:48.244 Removing: /var/run/dpdk/spdk_pid58796 00:23:48.244 Removing: /var/run/dpdk/spdk_pid58812 00:23:48.244 Removing: /var/run/dpdk/spdk_pid58879 00:23:48.244 Removing: /var/run/dpdk/spdk_pid58893 00:23:48.244 Removing: /var/run/dpdk/spdk_pid58960 00:23:48.244 Removing: /var/run/dpdk/spdk_pid58976 00:23:48.244 Removing: /var/run/dpdk/spdk_pid59027 00:23:48.244 Removing: /var/run/dpdk/spdk_pid59032 00:23:48.244 Removing: /var/run/dpdk/spdk_pid59076 00:23:48.244 Removing: /var/run/dpdk/spdk_pid59088 00:23:48.244 Removing: /var/run/dpdk/spdk_pid59224 00:23:48.244 Removing: /var/run/dpdk/spdk_pid59254 00:23:48.244 Removing: /var/run/dpdk/spdk_pid59342 00:23:48.244 Removing: /var/run/dpdk/spdk_pid59663 00:23:48.244 Removing: /var/run/dpdk/spdk_pid59685 00:23:48.244 Removing: /var/run/dpdk/spdk_pid59717 00:23:48.244 Removing: /var/run/dpdk/spdk_pid59727 00:23:48.244 Removing: /var/run/dpdk/spdk_pid59746 00:23:48.244 Removing: /var/run/dpdk/spdk_pid59765 00:23:48.244 Removing: /var/run/dpdk/spdk_pid59784 00:23:48.244 Removing: /var/run/dpdk/spdk_pid59800 00:23:48.244 Removing: /var/run/dpdk/spdk_pid59824 00:23:48.244 Removing: /var/run/dpdk/spdk_pid59832 00:23:48.244 Removing: /var/run/dpdk/spdk_pid59853 00:23:48.244 Removing: /var/run/dpdk/spdk_pid59872 00:23:48.244 Removing: /var/run/dpdk/spdk_pid59891 00:23:48.244 Removing: /var/run/dpdk/spdk_pid59901 00:23:48.244 Removing: /var/run/dpdk/spdk_pid59926 00:23:48.244 Removing: /var/run/dpdk/spdk_pid59939 00:23:48.244 Removing: /var/run/dpdk/spdk_pid59959 00:23:48.244 Removing: /var/run/dpdk/spdk_pid59981 00:23:48.244 Removing: /var/run/dpdk/spdk_pid59995 00:23:48.244 Removing: /var/run/dpdk/spdk_pid60010 00:23:48.244 Removing: /var/run/dpdk/spdk_pid60045 00:23:48.244 Removing: /var/run/dpdk/spdk_pid60060 00:23:48.244 Removing: /var/run/dpdk/spdk_pid60090 00:23:48.244 Removing: /var/run/dpdk/spdk_pid60162 00:23:48.244 Removing: /var/run/dpdk/spdk_pid60191 00:23:48.244 Removing: /var/run/dpdk/spdk_pid60200 00:23:48.503 Removing: /var/run/dpdk/spdk_pid60229 00:23:48.503 Removing: /var/run/dpdk/spdk_pid60238 00:23:48.503 Removing: /var/run/dpdk/spdk_pid60246 00:23:48.503 Removing: /var/run/dpdk/spdk_pid60290 00:23:48.503 Removing: /var/run/dpdk/spdk_pid60304 00:23:48.503 Removing: /var/run/dpdk/spdk_pid60338 00:23:48.503 Removing: /var/run/dpdk/spdk_pid60347 00:23:48.503 Removing: /var/run/dpdk/spdk_pid60357 00:23:48.503 Removing: /var/run/dpdk/spdk_pid60366 00:23:48.503 Removing: /var/run/dpdk/spdk_pid60376 00:23:48.503 Removing: /var/run/dpdk/spdk_pid60385 00:23:48.503 Removing: /var/run/dpdk/spdk_pid60395 00:23:48.503 Removing: /var/run/dpdk/spdk_pid60404 00:23:48.503 Removing: /var/run/dpdk/spdk_pid60433 00:23:48.503 Removing: /var/run/dpdk/spdk_pid60459 00:23:48.503 Removing: /var/run/dpdk/spdk_pid60471 00:23:48.503 Removing: /var/run/dpdk/spdk_pid60505 00:23:48.503 Removing: /var/run/dpdk/spdk_pid60509 00:23:48.503 Removing: /var/run/dpdk/spdk_pid60522 00:23:48.503 Removing: /var/run/dpdk/spdk_pid60557 00:23:48.503 Removing: /var/run/dpdk/spdk_pid60574 00:23:48.503 Removing: 
/var/run/dpdk/spdk_pid60600 00:23:48.503 Removing: /var/run/dpdk/spdk_pid60608 00:23:48.503 Removing: /var/run/dpdk/spdk_pid60621 00:23:48.503 Removing: /var/run/dpdk/spdk_pid60623 00:23:48.503 Removing: /var/run/dpdk/spdk_pid60636 00:23:48.503 Removing: /var/run/dpdk/spdk_pid60638 00:23:48.503 Removing: /var/run/dpdk/spdk_pid60651 00:23:48.503 Removing: /var/run/dpdk/spdk_pid60653 00:23:48.503 Removing: /var/run/dpdk/spdk_pid60735 00:23:48.503 Removing: /var/run/dpdk/spdk_pid60788 00:23:48.503 Removing: /var/run/dpdk/spdk_pid60906 00:23:48.503 Removing: /var/run/dpdk/spdk_pid60944 00:23:48.503 Removing: /var/run/dpdk/spdk_pid60988 00:23:48.503 Removing: /var/run/dpdk/spdk_pid61002 00:23:48.503 Removing: /var/run/dpdk/spdk_pid61024 00:23:48.503 Removing: /var/run/dpdk/spdk_pid61039 00:23:48.503 Removing: /var/run/dpdk/spdk_pid61076 00:23:48.503 Removing: /var/run/dpdk/spdk_pid61091 00:23:48.503 Removing: /var/run/dpdk/spdk_pid61169 00:23:48.503 Removing: /var/run/dpdk/spdk_pid61191 00:23:48.503 Removing: /var/run/dpdk/spdk_pid61235 00:23:48.503 Removing: /var/run/dpdk/spdk_pid61302 00:23:48.503 Removing: /var/run/dpdk/spdk_pid61361 00:23:48.503 Removing: /var/run/dpdk/spdk_pid61391 00:23:48.504 Removing: /var/run/dpdk/spdk_pid61486 00:23:48.504 Removing: /var/run/dpdk/spdk_pid61534 00:23:48.504 Removing: /var/run/dpdk/spdk_pid61572 00:23:48.504 Removing: /var/run/dpdk/spdk_pid61793 00:23:48.504 Removing: /var/run/dpdk/spdk_pid61896 00:23:48.504 Removing: /var/run/dpdk/spdk_pid61919 00:23:48.504 Removing: /var/run/dpdk/spdk_pid61954 00:23:48.504 Removing: /var/run/dpdk/spdk_pid61982 00:23:48.504 Removing: /var/run/dpdk/spdk_pid62021 00:23:48.504 Removing: /var/run/dpdk/spdk_pid62059 00:23:48.504 Removing: /var/run/dpdk/spdk_pid62086 00:23:48.504 Removing: /var/run/dpdk/spdk_pid62475 00:23:48.504 Removing: /var/run/dpdk/spdk_pid62514 00:23:48.504 Removing: /var/run/dpdk/spdk_pid62854 00:23:48.504 Removing: /var/run/dpdk/spdk_pid63326 00:23:48.504 Removing: /var/run/dpdk/spdk_pid63608 00:23:48.504 Removing: /var/run/dpdk/spdk_pid64459 00:23:48.504 Removing: /var/run/dpdk/spdk_pid65366 00:23:48.504 Removing: /var/run/dpdk/spdk_pid65489 00:23:48.504 Removing: /var/run/dpdk/spdk_pid65551 00:23:48.504 Removing: /var/run/dpdk/spdk_pid66974 00:23:48.504 Removing: /var/run/dpdk/spdk_pid67287 00:23:48.504 Removing: /var/run/dpdk/spdk_pid71099 00:23:48.504 Removing: /var/run/dpdk/spdk_pid71468 00:23:48.504 Removing: /var/run/dpdk/spdk_pid71578 00:23:48.504 Removing: /var/run/dpdk/spdk_pid71705 00:23:48.504 Removing: /var/run/dpdk/spdk_pid71726 00:23:48.504 Removing: /var/run/dpdk/spdk_pid71747 00:23:48.504 Removing: /var/run/dpdk/spdk_pid71774 00:23:48.504 Removing: /var/run/dpdk/spdk_pid71866 00:23:48.504 Removing: /var/run/dpdk/spdk_pid72006 00:23:48.504 Removing: /var/run/dpdk/spdk_pid72157 00:23:48.504 Removing: /var/run/dpdk/spdk_pid72239 00:23:48.504 Removing: /var/run/dpdk/spdk_pid72430 00:23:48.504 Removing: /var/run/dpdk/spdk_pid72502 00:23:48.504 Removing: /var/run/dpdk/spdk_pid72593 00:23:48.504 Removing: /var/run/dpdk/spdk_pid72950 00:23:48.504 Removing: /var/run/dpdk/spdk_pid73347 00:23:48.504 Removing: /var/run/dpdk/spdk_pid73348 00:23:48.504 Removing: /var/run/dpdk/spdk_pid73349 00:23:48.504 Removing: /var/run/dpdk/spdk_pid73619 00:23:48.504 Removing: /var/run/dpdk/spdk_pid73884 00:23:48.504 Removing: /var/run/dpdk/spdk_pid74281 00:23:48.504 Removing: /var/run/dpdk/spdk_pid74287 00:23:48.763 Removing: /var/run/dpdk/spdk_pid74610 00:23:48.763 Removing: /var/run/dpdk/spdk_pid74626 
00:23:48.763 Removing: /var/run/dpdk/spdk_pid74640 00:23:48.763 Removing: /var/run/dpdk/spdk_pid74671 00:23:48.763 Removing: /var/run/dpdk/spdk_pid74676 00:23:48.763 Removing: /var/run/dpdk/spdk_pid75035 00:23:48.763 Removing: /var/run/dpdk/spdk_pid75084 00:23:48.763 Removing: /var/run/dpdk/spdk_pid75406 00:23:48.763 Removing: /var/run/dpdk/spdk_pid75597 00:23:48.763 Removing: /var/run/dpdk/spdk_pid76022 00:23:48.764 Removing: /var/run/dpdk/spdk_pid76572 00:23:48.764 Removing: /var/run/dpdk/spdk_pid77466 00:23:48.764 Removing: /var/run/dpdk/spdk_pid78103 00:23:48.764 Removing: /var/run/dpdk/spdk_pid78105 00:23:48.764 Removing: /var/run/dpdk/spdk_pid80138 00:23:48.764 Removing: /var/run/dpdk/spdk_pid80191 00:23:48.764 Removing: /var/run/dpdk/spdk_pid80242 00:23:48.764 Removing: /var/run/dpdk/spdk_pid80296 00:23:48.764 Removing: /var/run/dpdk/spdk_pid80405 00:23:48.764 Removing: /var/run/dpdk/spdk_pid80471 00:23:48.764 Removing: /var/run/dpdk/spdk_pid80530 00:23:48.764 Removing: /var/run/dpdk/spdk_pid80586 00:23:48.764 Removing: /var/run/dpdk/spdk_pid80960 00:23:48.764 Removing: /var/run/dpdk/spdk_pid82178 00:23:48.764 Removing: /var/run/dpdk/spdk_pid82317 00:23:48.764 Removing: /var/run/dpdk/spdk_pid82555 00:23:48.764 Removing: /var/run/dpdk/spdk_pid83166 00:23:48.764 Removing: /var/run/dpdk/spdk_pid83326 00:23:48.764 Removing: /var/run/dpdk/spdk_pid83483 00:23:48.764 Removing: /var/run/dpdk/spdk_pid83580 00:23:48.764 Removing: /var/run/dpdk/spdk_pid83749 00:23:48.764 Removing: /var/run/dpdk/spdk_pid83858 00:23:48.764 Removing: /var/run/dpdk/spdk_pid84577 00:23:48.764 Removing: /var/run/dpdk/spdk_pid84615 00:23:48.764 Removing: /var/run/dpdk/spdk_pid84650 00:23:48.764 Removing: /var/run/dpdk/spdk_pid84906 00:23:48.764 Removing: /var/run/dpdk/spdk_pid84941 00:23:48.764 Removing: /var/run/dpdk/spdk_pid84971 00:23:48.764 Removing: /var/run/dpdk/spdk_pid85440 00:23:48.764 Removing: /var/run/dpdk/spdk_pid85450 00:23:48.764 Removing: /var/run/dpdk/spdk_pid85698 00:23:48.764 Removing: /var/run/dpdk/spdk_pid85824 00:23:48.764 Removing: /var/run/dpdk/spdk_pid85843 00:23:48.764 Clean 00:23:48.764 09:51:36 -- common/autotest_common.sh@1453 -- # return 0 00:23:48.764 09:51:36 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:23:48.764 09:51:36 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:48.764 09:51:36 -- common/autotest_common.sh@10 -- # set +x 00:23:48.764 09:51:36 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:23:48.764 09:51:36 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:48.764 09:51:36 -- common/autotest_common.sh@10 -- # set +x 00:23:48.764 09:51:36 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:23:48.764 09:51:36 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:23:48.764 09:51:36 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:23:49.023 09:51:36 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:23:49.023 09:51:36 -- spdk/autotest.sh@398 -- # hostname 00:23:49.023 09:51:36 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:23:49.023 geninfo: WARNING: invalid characters removed from testname! 
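(Editor's note) The "Cleaning" step above removes the per-instance DPDK runtime directories under /var/run/dpdk (spdk0 through spdk4 and the per-PID spdk_pid* directories, each holding config, fbarray_memseg-*, fbarray_memzone and hugepage_info) plus the shared-memory trace files left by the target apps. A minimal manual-cleanup sketch, assuming the default runtime locations shown in this log (paths would differ if the apps were started with a different --file-prefix):

    #!/usr/bin/env bash
    # Sketch: clear stale SPDK/DPDK runtime state after a test run.
    set -euo pipefail

    # Per-instance and per-PID DPDK runtime directories reported above.
    rm -rf /var/run/dpdk/spdk[0-9]* /var/run/dpdk/spdk_pid*

    # Shared-memory trace files written by the nvmf target and spdk_tgt.
    rm -f /dev/shm/nvmf_trace.* /dev/shm/spdk_tgt_trace.*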
00:24:15.591 09:52:01 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:24:18.876 09:52:05 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:24:21.406 09:52:08 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:24:23.937 09:52:11 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:24:27.223 09:52:14 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:24:29.756 09:52:17 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:24:32.292 09:52:19 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:24:32.292 09:52:19 -- spdk/autorun.sh@1 -- $ timing_finish 00:24:32.292 09:52:19 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:24:32.292 09:52:19 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:24:32.292 09:52:19 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:24:32.292 09:52:19 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:24:32.292 + [[ -n 5201 ]] 00:24:32.292 + sudo kill 5201 00:24:32.302 [Pipeline] } 00:24:32.319 [Pipeline] // timeout 00:24:32.324 [Pipeline] } 00:24:32.340 [Pipeline] // stage 00:24:32.345 [Pipeline] } 00:24:32.358 [Pipeline] // catchError 00:24:32.368 [Pipeline] stage 00:24:32.370 [Pipeline] { (Stop VM) 00:24:32.384 [Pipeline] sh 00:24:32.666 + vagrant halt 00:24:35.950 ==> default: Halting domain... 
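(Editor's note) The coverage post-processing above captures the counters gathered while the tests ran, merges them with the pre-test baseline, and then strips bundled DPDK, system headers, and example/app code from the totals. A condensed sketch of that lcov chain, using the repo and output paths shown in the log (cov_base.info is assumed to have been captured before the tests started):

    #!/usr/bin/env bash
    # Sketch of the lcov post-processing chain recorded in the log.
    SPDK=/home/vagrant/spdk_repo/spdk
    OUT=/home/vagrant/spdk_repo/output
    RC="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1"

    # Capture counters from the test run (no external sources).
    lcov $RC -q -c --no-external -d "$SPDK" -o "$OUT/cov_test.info"

    # Merge the baseline and the test capture into one tracefile.
    lcov $RC -q -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info"

    # Remove paths that should not count toward SPDK coverage.
    for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
        lcov $RC -q -r "$OUT/cov_total.info" "$pat" -o "$OUT/cov_total.info"
    done

    # The log also renders a build-timing flame graph from timing.txt; the
    # output file name below is illustrative, not taken from the log.
    # /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: \
    #     --countname seconds "$OUT/timing.txt" > "$OUT/timing.svg"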
00:24:42.527 [Pipeline] sh 00:24:42.806 + vagrant destroy -f 00:24:46.998 ==> default: Removing domain... 00:24:47.011 [Pipeline] sh 00:24:47.292 + mv output /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/output 00:24:47.301 [Pipeline] } 00:24:47.312 [Pipeline] // stage 00:24:47.318 [Pipeline] } 00:24:47.333 [Pipeline] // dir 00:24:47.338 [Pipeline] } 00:24:47.352 [Pipeline] // wrap 00:24:47.358 [Pipeline] } 00:24:47.371 [Pipeline] // catchError 00:24:47.380 [Pipeline] stage 00:24:47.381 [Pipeline] { (Epilogue) 00:24:47.393 [Pipeline] sh 00:24:47.677 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:24:54.251 [Pipeline] catchError 00:24:54.253 [Pipeline] { 00:24:54.269 [Pipeline] sh 00:24:54.550 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:24:54.550 Artifacts sizes are good 00:24:54.560 [Pipeline] } 00:24:54.575 [Pipeline] // catchError 00:24:54.587 [Pipeline] archiveArtifacts 00:24:54.594 Archiving artifacts 00:24:54.716 [Pipeline] cleanWs 00:24:54.728 [WS-CLEANUP] Deleting project workspace... 00:24:54.728 [WS-CLEANUP] Deferred wipeout is used... 00:24:54.734 [WS-CLEANUP] done 00:24:54.736 [Pipeline] } 00:24:54.752 [Pipeline] // stage 00:24:54.757 [Pipeline] } 00:24:54.772 [Pipeline] // node 00:24:54.779 [Pipeline] End of Pipeline 00:24:54.832 Finished: SUCCESS
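(Editor's note) The epilogue above halts and destroys the test VM, moves the output directory back into the Jenkins workspace, then compresses and size-checks the artifacts before archiving and cleaning the workspace. A rough shell equivalent of those steps, with the workspace path and script names copied from the log:

    #!/usr/bin/env bash
    # Sketch of the teardown/epilogue stage recorded in the log.
    WS=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2

    vagrant halt                    # stop the libvirt test domain
    vagrant destroy -f              # remove it without prompting

    mv output "$WS/output"          # hand results back to the Jenkins workspace

    # Artifact handling scripts from the jbp checkout made at the start of the job.
    jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
    jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh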